[
{
"msg_contents": "I came across a couple of places in the planner that are checking\nfor nonempty havingQual; but since these bits run after\nconst-simplification of the HAVING clause, that produces the wrong\nanswer for a constant-true HAVING clause (which'll be folded to\nempty). Correct code is to check root->hasHavingQual instead.\n\nThese mistakes only affect cost estimates, and they're sufficiently\ncorner cases that it'd be hard even to devise a reliable test case\nshowing a different plan choice. So I'm not very excited about this,\nand am thinking of committing only to HEAD.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 17 Oct 2022 17:37:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "havingQual vs hasHavingQual buglets"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 5:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I came across a couple of places in the planner that are checking\n> for nonempty havingQual; but since these bits run after\n> const-simplification of the HAVING clause, that produces the wrong\n> answer for a constant-true HAVING clause (which'll be folded to\n> empty). Correct code is to check root->hasHavingQual instead.\n\n\n+1. root->hasHavingQual is set before we do any expression\npreprocessing. It should be the right one to check with.\n\nThanks\nRichard\n\nOn Tue, Oct 18, 2022 at 5:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:I came across a couple of places in the planner that are checking\nfor nonempty havingQual; but since these bits run after\nconst-simplification of the HAVING clause, that produces the wrong\nanswer for a constant-true HAVING clause (which'll be folded to\nempty). Correct code is to check root->hasHavingQual instead. +1. root->hasHavingQual is set before we do any expressionpreprocessing. It should be the right one to check with.ThanksRichard",
"msg_date": "Tue, 18 Oct 2022 08:47:35 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: havingQual vs hasHavingQual buglets"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 9:47 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Tue, Oct 18, 2022 at 5:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I came across a couple of places in the planner that are checking\n>> for nonempty havingQual; but since these bits run after\n>> const-simplification of the HAVING clause, that produces the wrong\n>> answer for a constant-true HAVING clause (which'll be folded to\n>> empty). Correct code is to check root->hasHavingQual instead.\n\nThe postgres_fdw bits would be my oversight. :-(\n\n> +1. root->hasHavingQual is set before we do any expression\n> preprocessing. It should be the right one to check with.\n\n+1 HEAD only seems reasonable.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 18 Oct 2022 18:23:49 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: havingQual vs hasHavingQual buglets"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> On Tue, Oct 18, 2022 at 9:47 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>> On Tue, Oct 18, 2022 at 5:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I came across a couple of places in the planner that are checking\n>>> for nonempty havingQual; but since these bits run after\n>>> const-simplification of the HAVING clause, that produces the wrong\n>>> answer for a constant-true HAVING clause (which'll be folded to\n>>> empty). Correct code is to check root->hasHavingQual instead.\n\n> The postgres_fdw bits would be my oversight. :-(\n\nNo worries --- I think the one in set_subquery_pathlist is probably\nmy fault :-(\n\n> +1 HEAD only seems reasonable.\n\nPushed that way; thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Oct 2022 10:46:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: havingQual vs hasHavingQual buglets"
}
]
[
{
"msg_contents": "Hi,\n\nI have seen 2 patches registered in CF failing on Linux - Debian\nBullseye in wait_for_subscription_sync(). It seems like the tables\naren't being synced. I have not done any further analysis. I'm not\nsure if this issue is being discussed elsewhere.\n\n# Postmaster PID for node \"twoways\" is 50208\nWaiting for all subscriptions in \"twoways\" to synchronize data\n[14:12:43.092](198.391s) # poll_query_until timed out executing this query:\n# SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN\n('r', 's');\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for subscriber to synchronize data at t/100_bugs.pl line 147.\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6618623857917952/log/src/test/subscription/tmp_check/log/regress_log_100_bugs\nhttps://cirrus-ci.com/task/6618623857917952\nhttps://cirrus-ci.com/task/5764058174455808\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 11:46:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "CF Bot failure in wait_for_subscription_sync()"
},
{
"msg_contents": "On Tuesday, October 18, 2022 2:16 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> I have seen 2 patches registered in CF failing on Linux - Debian Bullseye in\r\n> wait_for_subscription_sync(). It seems like the tables aren't being synced. I\r\n> have not done any further analysis. I'm not sure if this issue is being discussed\r\n> elsewhere.\r\n> \r\n> # Postmaster PID for node \"twoways\" is 50208 Waiting for all subscriptions in\r\n> \"twoways\" to synchronize data\r\n> [14:12:43.092](198.391s) # poll_query_until timed out executing this query:\r\n> # SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r',\r\n> 's'); # expecting this output:\r\n> # t\r\n> # last actual query output:\r\n> # f\r\n> # with stderr:\r\n> timed out waiting for subscriber to synchronize data at t/100_bugs.pl line 147.\r\n> \r\n> https://api.cirrus-ci.com/v1/artifact/task/6618623857917952/log/src/test/sub\r\n> scription/tmp_check/log/regress_log_100_bugs\r\n> https://cirrus-ci.com/task/6618623857917952\r\n> https://cirrus-ci.com/task/5764058174455808\r\n\r\nThanks for reporting this. I am not sure about the root cause but just share\r\nsome initial analysis here.\r\n\r\nThis testcase waits for table sync to finish for both table \"t\" and table \"t2\".\r\nBut from the log, I can only see the log[1] related to the table sync of table\r\n\"t\". So it seems that the table sync worker for table \"t2\" was never started\r\ndue to some reason. I tried it locally but have not reproduced this yet.\r\n\r\n[1]---\r\n2022-10-17 10:16:37.216 UTC [48051][logical replication worker] LOG: logical replication table synchronization worker for subscription \"testsub\", table \"t\" has finished\r\n---\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Tue, 18 Oct 2022 09:27:32 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: CF Bot failure in wait_for_subscription_sync()"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 2:57 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, October 18, 2022 2:16 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I have seen 2 patches registered in CF failing on Linux - Debian Bullseye in\n> > wait_for_subscription_sync(). It seems like the tables aren't being synced. I\n> > have not done any further analysis. I'm not sure if this issue is being discussed\n> > elsewhere.\n> >\n> > # Postmaster PID for node \"twoways\" is 50208 Waiting for all subscriptions in\n> > \"twoways\" to synchronize data\n> > [14:12:43.092](198.391s) # poll_query_until timed out executing this query:\n> > # SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r',\n> > 's'); # expecting this output:\n> > # t\n> > # last actual query output:\n> > # f\n> > # with stderr:\n> > timed out waiting for subscriber to synchronize data at t/100_bugs.pl line 147.\n> >\n> > https://api.cirrus-ci.com/v1/artifact/task/6618623857917952/log/src/test/sub\n> > scription/tmp_check/log/regress_log_100_bugs\n> > https://cirrus-ci.com/task/6618623857917952\n> > https://cirrus-ci.com/task/5764058174455808\n>\n> Thanks for reporting this. I am not sure about the root cause but just share\n> some initial analysis here.\n>\n> This testcase waits for table sync to finish for both table \"t\" and table \"t2\".\n> But from the log, I can only see the log[1] related to the table sync of table\n> \"t\". So it seems that the table sync worker for table \"t2\" was never started\n> due to some reason.\n>\n\nYeah, the reason is not clear to me either. Let me state my\nunderstanding of the issue. IIUC, the following test in 100_bugs.pl\nhas failed:\n$node_twoways->safe_psql('d1', 'ALTER PUBLICATION testpub ADD TABLE t2');\n$node_twoways->safe_psql('d2',\n'ALTER SUBSCRIPTION testsub REFRESH PUBLICATION');\n...\n...\n$node_twoways->wait_for_subscription_sync($node_twoways, 'testsub', 'd2');\n\nAfter the REFRESH operation, the new table should have been added to\npg_subscription_rel. This is also visible from LOGS because one of\ntables has synced and for table another worker is not started. Now,\nideally, after the new entry is created in pg_subscription_rel, the\napply worker should have got an invalidation message and refreshed the\n'table_states_not_ready' list which should have let it start the new\ntable sync worker for table 't2' but that is not happening here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Oct 2022 18:34:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CF Bot failure in wait_for_subscription_sync()"
}
]
[
{
"msg_contents": "Hi,\n\nIn standby mode, the state machine in WaitForWALToBecomeAvailable()\nreads WAL from pg_wal after failing to read from the archive. This is\ncurrently implemented in XLogFileReadAnyTLI() by calling\nXLogFileRead() with source XLOG_FROM_PG_WAL after it fails with source\nXLOG_FROM_PG_ARCHIVE and the current source isn't changed at all.\nAlso, passing the source to XLogFileReadAnyTLI() in\nWaitForWALToBecomeAvailable() isn't straight i.e. it's not necessary\nto pass in XLOG_FROM_ANY at all. These things make the state machine a\nbit complicated and hard to understand.\n\nThe attached patch attempts to simplify the code a bit by changing the\ncurrent source to XLOG_FROM_PG_WAL after failing in\nXLOG_FROM_PG_ARCHIVE so that the state machine can move smoothly to\nread from pg_wal. And we can just pass the current source to\nXLogFileReadAnyTLI(). It also enables us to reduce a bit of extra\nXLogFileRead() code in XLogFileReadAnyTLI().\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 18 Oct 2022 12:01:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Simplify standby state machine a bit in WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 12:01 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> In standby mode, the state machine in WaitForWALToBecomeAvailable()\n> reads WAL from pg_wal after failing to read from the archive. This is\n> currently implemented in XLogFileReadAnyTLI() by calling\n> XLogFileRead() with source XLOG_FROM_PG_WAL after it fails with source\n> XLOG_FROM_PG_ARCHIVE and the current source isn't changed at all.\n> Also, passing the source to XLogFileReadAnyTLI() in\n> WaitForWALToBecomeAvailable() isn't straight i.e. it's not necessary\n> to pass in XLOG_FROM_ANY at all. These things make the state machine a\n> bit complicated and hard to understand.\n>\n> The attached patch attempts to simplify the code a bit by changing the\n> current source to XLOG_FROM_PG_WAL after failing in\n> XLOG_FROM_PG_ARCHIVE so that the state machine can move smoothly to\n> read from pg_wal. And we can just pass the current source to\n> XLogFileReadAnyTLI(). It also enables us to reduce a bit of extra\n> XLogFileRead() code in XLogFileReadAnyTLI().\n>\n> Thoughts?\n\n+1\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 18 Oct 2022 13:02:30 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 1:03 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 12:01 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In standby mode, the state machine in WaitForWALToBecomeAvailable()\n> > reads WAL from pg_wal after failing to read from the archive. This is\n> > currently implemented in XLogFileReadAnyTLI() by calling\n> > XLogFileRead() with source XLOG_FROM_PG_WAL after it fails with source\n> > XLOG_FROM_PG_ARCHIVE and the current source isn't changed at all.\n> > Also, passing the source to XLogFileReadAnyTLI() in\n> > WaitForWALToBecomeAvailable() isn't straight i.e. it's not necessary\n> > to pass in XLOG_FROM_ANY at all. These things make the state machine a\n> > bit complicated and hard to understand.\n> >\n> > The attached patch attempts to simplify the code a bit by changing the\n> > current source to XLOG_FROM_PG_WAL after failing in\n> > XLOG_FROM_PG_ARCHIVE so that the state machine can move smoothly to\n> > read from pg_wal. And we can just pass the current source to\n> > XLogFileReadAnyTLI(). It also enables us to reduce a bit of extra\n> > XLogFileRead() code in XLogFileReadAnyTLI().\n> >\n> > Thoughts?\n>\n> +1\n\nThanks. Let's see what others think about it. I will add a CF entry in a while.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Oct 2022 09:42:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 12:01:07PM +0530, Bharath Rupireddy wrote:\n> The attached patch attempts to simplify the code a bit by changing the\n> current source to XLOG_FROM_PG_WAL after failing in\n> XLOG_FROM_PG_ARCHIVE so that the state machine can move smoothly to\n> read from pg_wal. And we can just pass the current source to\n> XLogFileReadAnyTLI(). It also enables us to reduce a bit of extra\n> XLogFileRead() code in XLogFileReadAnyTLI().\n\nThis looks correct to me. The only thing that stood out to me was the loop\nthrough 'tles' in XLogFileReadyAnyTLI. With this change, we'd loop through\nthe timelines for both XLOG_FROM_PG_ARCHIVE and XLOG_FROM_PG_WAL, whereas\nnow we only loop through the timelines once. However, I doubt this makes\nmuch difference in practice. You'd only do the extra loop whenever\nrestoring from the archives failed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Dec 2022 10:32:57 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 10:32:57AM -0800, Nathan Bossart wrote:\n> This looks correct to me. The only thing that stood out to me was the loop\n> through 'tles' in XLogFileReadyAnyTLI. With this change, we'd loop through\n> the timelines for both XLOG_FROM_PG_ARCHIVE and XLOG_FROM_PG_WAL, whereas\n> now we only loop through the timelines once. However, I doubt this makes\n> much difference in practice. You'd only do the extra loop whenever\n> restoring from the archives failed.\n\n case XLOG_FROM_ARCHIVE:\n+\n+ /*\n+ * After failing to read from archive, we try to read from\n+ * pg_wal.\n+ */\n+ currentSource = XLOG_FROM_PG_WAL;\n+ break;\nIn standby mode, the priority lookup order is pg_wal -> archive ->\nstream. With this change, we would do pg_wal -> archive -> pg_wal ->\nstream, meaning that it could influence some recovery scenarios while\ninvolving more lookups than necessary to the local pg_wal/ directory?\n\nSee, on failure where the current source is XLOG_FROM_ARCHIVE, we\nwould not switch anymore directly to XLOG_FROM_STREAM.\n--\nMichael",
"msg_date": "Tue, 3 Jan 2023 11:17:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 7:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 30, 2022 at 10:32:57AM -0800, Nathan Bossart wrote:\n> > This looks correct to me. The only thing that stood out to me was the loop\n> > through 'tles' in XLogFileReadyAnyTLI. With this change, we'd loop through\n> > the timelines for both XLOG_FROM_PG_ARCHIVE and XLOG_FROM_PG_WAL, whereas\n> > now we only loop through the timelines once. However, I doubt this makes\n> > much difference in practice. You'd only do the extra loop whenever\n> > restoring from the archives failed.\n>\n> case XLOG_FROM_ARCHIVE:\n> +\n> + /*\n> + * After failing to read from archive, we try to read from\n> + * pg_wal.\n> + */\n> + currentSource = XLOG_FROM_PG_WAL;\n> + break;\n> In standby mode, the priority lookup order is pg_wal -> archive ->\n> stream. With this change, we would do pg_wal -> archive -> pg_wal ->\n> stream, meaning that it could influence some recovery scenarios while\n> involving more lookups than necessary to the local pg_wal/ directory?\n>\n> See, on failure where the current source is XLOG_FROM_ARCHIVE, we\n> would not switch anymore directly to XLOG_FROM_STREAM.\n\nI think there's a bit of disconnect here - here's what I understand:\n\nStandby when started can either enter to crash recovery (if it is a\nrestart after crash) or enter to archive recovery directly.\n\nThe standby, when in crash recovery:\ncurrentSource is set to XLOG_FROM_PG_WAL in\nWaitForWALToBecomeAvailable() and it continues to exhaust replaying\nall the WAL in the pg_wal directory.\nAfter all the pg_wal is exhausted during crash recovery, currentSource\nis set to XLOG_FROM_ANY in ReadRecord() and the standby enters archive\nrecovery mode (see below).\n\nThe standby, when in archive recovery:\nIn WaitForWALToBecomeAvailable() currentSource is set to\nXLOG_FROM_ARCHIVE and it enters XLogFileReadAnyTLI() - first tries to\nfetch WAL from archive and returns if succeeds otherwise tries to\nfetch from pg_wal and returns if succeeds, otherwise returns with\nfailure.\nIf failure is returned from XLogFileReadAnyTLI(), change the\ncurrentSource to XLOG_FROM_STREAM.\nIf a failure in XLOG_FROM_STREAM, the currentSource is set to\nXLOG_FROM_ARCHIVE and XLogFileReadAnyTLI() is called again.\n\nNote that the standby exits from this WaitForWALToBecomeAvailable()\nstate machine when the promotion signal is detected and before which\nall the wal from archive -> pg_wal is exhausted.\n\nNote that currentSource is set to XLOG_FROM_PG_WAL in\nWaitForWALToBecomeAvailable() only after the server exits archive\nrecovery i.e. InArchiveRecovery is set to false in\nFinishWalRecovery(). However, exhausting pg_wal for recovery is built\ninherently within XLogFileReadAnyTLI().\n\nIn summary:\nthe flow when the standby is in crash recovery is pg_wal -> [archive\n-> pg_wal -> stream] -> [archive -> pg_wal -> stream] -> [] -> [] ...\nthe flow when the standby is in archive recovery is [archive -> pg_wal\n-> stream] -> [archive -> pg_wal -> stream] -> [] -> [] ...\n\nThe proposed patch makes the inherent state change to pg_wal after\nfailure to read from archive in XLogFileReadAnyTLI() to explicit by\nsetting currentSource to XLOG_FROM_PG_WAL in the state machine. I\nthink it doesn't alter the existing state machine or add any new extra\nlookups in pg_wal.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 14:53:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 12:03 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 12:01:07PM +0530, Bharath Rupireddy wrote:\n> > The attached patch attempts to simplify the code a bit by changing the\n> > current source to XLOG_FROM_PG_WAL after failing in\n> > XLOG_FROM_PG_ARCHIVE so that the state machine can move smoothly to\n> > read from pg_wal. And we can just pass the current source to\n> > XLogFileReadAnyTLI(). It also enables us to reduce a bit of extra\n> > XLogFileRead() code in XLogFileReadAnyTLI().\n>\n> This looks correct to me. The only thing that stood out to me was the loop\n> through 'tles' in XLogFileReadyAnyTLI. With this change, we'd loop through\n> the timelines for both XLOG_FROM_PG_ARCHIVE and XLOG_FROM_PG_WAL, whereas\n> now we only loop through the timelines once. However, I doubt this makes\n> much difference in practice. You'd only do the extra loop whenever\n> restoring from the archives failed.\n\nRight. With the patch, we'd loop again through the tles after a\nfailure from the archive. Since the curFileTLI isn't changed unless a\nsuccessful read, we'd read from pg_wal with tli where we earlier left\noff reading from the archive. I'm not sure if this extra looping is\nworth worrying about.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 15:22:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 02:53:10PM +0530, Bharath Rupireddy wrote:\n> In summary:\n> the flow when the standby is in crash recovery is pg_wal -> [archive\n> -> pg_wal -> stream] -> [archive -> pg_wal -> stream] -> [] -> [] ...\n> the flow when the standby is in archive recovery is [archive -> pg_wal\n> -> stream] -> [archive -> pg_wal -> stream] -> [] -> [] ...\n\nThis is my understanding as well.\n \n> The proposed patch makes the inherent state change to pg_wal after\n> failure to read from archive in XLogFileReadAnyTLI() to explicit by\n> setting currentSource to XLOG_FROM_PG_WAL in the state machine. I\n> think it doesn't alter the existing state machine or add any new extra\n> lookups in pg_wal.\n\nI'm assuming this change would simplify your other patch that modifies\nWaitForWALToBecomeAvailable() [0]. Is that correct?\n\n[0] https://commitfest.postgresql.org/41/3663/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 10:33:24 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 12:03 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Jan 03, 2023 at 02:53:10PM +0530, Bharath Rupireddy wrote:\n> > In summary:\n> > the flow when the standby is in crash recovery is pg_wal -> [archive\n> > -> pg_wal -> stream] -> [archive -> pg_wal -> stream] -> [] -> [] ...\n> > the flow when the standby is in archive recovery is [archive -> pg_wal\n> > -> stream] -> [archive -> pg_wal -> stream] -> [] -> [] ...\n>\n> This is my understanding as well.\n>\n> > The proposed patch makes the inherent state change to pg_wal after\n> > failure to read from archive in XLogFileReadAnyTLI() to explicit by\n> > setting currentSource to XLOG_FROM_PG_WAL in the state machine. I\n> > think it doesn't alter the existing state machine or add any new extra\n> > lookups in pg_wal.\n>\n> I'm assuming this change would simplify your other patch that modifies\n> WaitForWALToBecomeAvailable() [0]. Is that correct?\n>\n> [0] https://commitfest.postgresql.org/41/3663/\n\nYes, it does simplify the other feature patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 18:54:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 02:53:10PM +0530, Bharath Rupireddy wrote:\n> The proposed patch makes the inherent state change to pg_wal after\n> failure to read from archive in XLogFileReadAnyTLI() to explicit by\n> setting currentSource to XLOG_FROM_PG_WAL in the state machine. I\n> think it doesn't alter the existing state machine or add any new extra\n> lookups in pg_wal.\n\nWell, did you notice 4d894b41? It introduced this change:\n\n- readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2, currentSource);\n+ readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2,\n+ currentSource == XLOG_FROM_ARCHIVE ? XLOG_FROM_ANY :\n+ currentSource);\n\nAnd this patch basically undoes that, meaning that we would basically\nlook at the archives first for all the expected TLIs, but only if no\nfiles were found in pg_wal/.\n\nThe change is subtle, see XLogFileReadAnyTLI(). On HEAD we go through\neach timeline listed and check both archives and then pg_wal/ after\nthe last source that failed was the archives. The patch does\nsomething different: it checks all the timelines for the archives,\nthen all the timelines in pg_wal/ with two separate calls to\nXLogFileReadAnyTLI().\n--\nMichael",
"msg_date": "Wed, 1 Mar 2023 17:15:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 1:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 03, 2023 at 02:53:10PM +0530, Bharath Rupireddy wrote:\n> > The proposed patch makes the inherent state change to pg_wal after\n> > failure to read from archive in XLogFileReadAnyTLI() to explicit by\n> > setting currentSource to XLOG_FROM_PG_WAL in the state machine. I\n> > think it doesn't alter the existing state machine or add any new extra\n> > lookups in pg_wal.\n>\n> Well, did you notice 4d894b41? It introduced this change:\n>\n> - readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2, currentSource);\n> + readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2,\n> + currentSource == XLOG_FROM_ARCHIVE ? XLOG_FROM_ANY :\n> + currentSource);\n>\n> And this patch basically undoes that, meaning that we would basically\n> look at the archives first for all the expected TLIs, but only if no\n> files were found in pg_wal/.\n>\n> The change is subtle, see XLogFileReadAnyTLI(). On HEAD we go through\n> each timeline listed and check both archives and then pg_wal/ after\n> the last source that failed was the archives. The patch does\n> something different: it checks all the timelines for the archives,\n> then all the timelines in pg_wal/ with two separate calls to\n> XLogFileReadAnyTLI().\n\nThanks. Yeah, the patch proposed here just reverts that commit [1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4d894b41cd12179b710526eba9dc62c2b99abc4d.\n\nThat commit fixes an issue - \"If there is a WAL segment with same ID\nbut different TLI present in both the WAL archive and pg_xlog, prefer\nthe one with higher TLI.\".\n\nI will withdraw the patch proposed in this thread.\n\n[1]\ncommit 4d894b41cd12179b710526eba9dc62c2b99abc4d\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: Fri Feb 14 15:15:09 2014 +0200\n\n Change the order that pg_xlog and WAL archive are polled for WAL segments.\n\n If there is a WAL segment with same ID but different TLI present in both\n the WAL archive and pg_xlog, prefer the one with higher TLI. Before this\n patch, the archive was polled first, for all expected TLIs, and only if no\n file was found was pg_xlog scanned. This was a change in behavior from 9.3,\n which first scanned archive and pg_xlog for the highest TLI, then archive\n and pg_xlog for the next highest TLI and so forth. This patch reverts the\n behavior back to what it was in 9.2.\n\n The reason for this is that if for example you try to do archive recovery\n to timeline 2, which branched off timeline 1, but the WAL for timeline 2 is\n not archived yet, we would replay past the timeline switch point on\n timeline 1 using the archived files, before even looking timeline 2's files\n in pg_xlog\n\n Report and patch by Kyotaro Horiguchi. Backpatch to 9.3 where the behavior\n was changed.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 3 Mar 2023 13:38:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 01:38:38PM +0530, Bharath Rupireddy wrote:\n> I will withdraw the patch proposed in this thread.\n\nOkay, thanks for confirming.\n--\nMichael",
"msg_date": "Fri, 3 Mar 2023 20:13:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 01:38:38PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 1, 2023 at 1:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Well, did you notice 4d894b41? It introduced this change:\n>>\n>> - readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2, currentSource);\n>> + readFile = XLogFileReadAnyTLI(readSegNo, DEBUG2,\n>> + currentSource == XLOG_FROM_ARCHIVE ? XLOG_FROM_ANY :\n>> + currentSource);\n>>\n>> And this patch basically undoes that, meaning that we would basically\n>> look at the archives first for all the expected TLIs, but only if no\n>> files were found in pg_wal/.\n>>\n>> The change is subtle, see XLogFileReadAnyTLI(). On HEAD we go through\n>> each timeline listed and check both archives and then pg_wal/ after\n>> the last source that failed was the archives. The patch does\n>> something different: it checks all the timelines for the archives,\n>> then all the timelines in pg_wal/ with two separate calls to\n>> XLogFileReadAnyTLI().\n> \n> Thanks. Yeah, the patch proposed here just reverts that commit [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=4d894b41cd12179b710526eba9dc62c2b99abc4d.\n> \n> That commit fixes an issue - \"If there is a WAL segment with same ID\n> but different TLI present in both the WAL archive and pg_xlog, prefer\n> the one with higher TLI.\".\n\nGiven both Bharath and I missed this, perhaps we should add a comment about\nthis behavior.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 3 Mar 2023 16:33:39 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 04:33:39PM -0800, Nathan Bossart wrote:\n> Given both Bharath and I missed this, perhaps we should add a comment about\n> this behavior.\n\nMakes sense to me to document that in a better way. What about the\naddition of a short paragraph at the top of XLogFileReadAnyTLI() that\nexplains the behaviors we expect depending on the values of\nXLogSource? The case of XLOG_FROM_ANY with the order to scan the\narchives then pg_wal/ for each timeline is the most important bit,\nsurely.\n--\nMichael",
"msg_date": "Sat, 4 Mar 2023 11:30:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 8:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 03, 2023 at 04:33:39PM -0800, Nathan Bossart wrote:\n> > Given both Bharath and I missed this, perhaps we should add a comment about\n> > this behavior.\n>\n> Makes sense to me to document that in a better way. What about the\n> addition of a short paragraph at the top of XLogFileReadAnyTLI() that\n> explains the behaviors we expect depending on the values of\n> XLogSource? The case of XLOG_FROM_ANY with the order to scan the\n> archives then pg_wal/ for each timeline is the most important bit,\n> surely.\n\n+1. I will send a patch in a bit.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 4 Mar 2023 08:14:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 8:14 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Mar 4, 2023 at 8:00 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Mar 03, 2023 at 04:33:39PM -0800, Nathan Bossart wrote:\n> > > Given both Bharath and I missed this, perhaps we should add a comment about\n> > > this behavior.\n> >\n> > Makes sense to me to document that in a better way. What about the\n> > addition of a short paragraph at the top of XLogFileReadAnyTLI() that\n> > explains the behaviors we expect depending on the values of\n> > XLogSource? The case of XLOG_FROM_ANY with the order to scan the\n> > archives then pg_wal/ for each timeline is the most important bit,\n> > surely.\n>\n> +1. I will send a patch in a bit.\n\nOkay, here's a patch attached.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 4 Mar 2023 09:47:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Sat, Mar 04, 2023 at 09:47:05AM +0530, Bharath Rupireddy wrote:\n> Okay, here's a patch attached.\n\nThanks.\n\n+ * When source == XLOG_FROM_ANY, this function first searches for the segment\n+ * with a TLI in archive first, if not found, it searches in pg_wal. This way,\n+ * if there is a WAL segment with same passed-in segno but different TLI\n+ * present in both the archive and pg_wal, it prefers the one with higher TLI.\n+ * The reason for this is that if for example we try to do archive recovery to\n+ * timeline 2, which branched off timeline 1, but the WAL for timeline 2 is not\n+ * archived yet, we would replay past the timeline switch point on timeline 1\n+ * using the archived WAL segment, before even looking timeline 2's WAL\n+ * segments in pg_wal.\n\nThis is pretty much what the commit has mentioned. The first half\nprovides enough details, IMO.\n--\nMichael",
"msg_date": "Mon, 6 Mar 2023 16:56:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 1:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Mar 04, 2023 at 09:47:05AM +0530, Bharath Rupireddy wrote:\n> > Okay, here's a patch attached.\n>\n> Thanks.\n>\n> + * When source == XLOG_FROM_ANY, this function first searches for the segment\n> + * with a TLI in archive first, if not found, it searches in pg_wal. This way,\n> + * if there is a WAL segment with same passed-in segno but different TLI\n> + * present in both the archive and pg_wal, it prefers the one with higher TLI.\n> + * The reason for this is that if for example we try to do archive recovery to\n> + * timeline 2, which branched off timeline 1, but the WAL for timeline 2 is not\n> + * archived yet, we would replay past the timeline switch point on timeline 1\n> + * using the archived WAL segment, before even looking timeline 2's WAL\n> + * segments in pg_wal.\n>\n> This is pretty much what the commit has mentioned. The first half\n> provides enough details, IMO.\n\nIMO, mentioning the example from the commit message in the function\ncomment makes things more clear - one doesn't have to go look for the\ncommit message for that.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 13:47:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Simplify standby state machine a bit in\n WaitForWALToBecomeAvailable()"
}
]
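The two segment-search orders debated in the thread above can be sketched with a small model. This is a hypothetical simplification (the function names and the set-of-pairs representation are illustrative only, not PostgreSQL's actual code), showing why checking both sources per timeline, as commit 4d894b41 made XLogFileReadAnyTLI do, prefers the segment with the higher TLI:

```python
# Hypothetical model of the two search orders; names are illustrative only,
# not PostgreSQL's actual code.  Each source is a set of (tli, segno) pairs.

def search_per_timeline(timelines, archive, pg_wal, segno):
    # For each timeline (highest first), check the archive and then pg_wal
    # before falling back to the next timeline: the post-4d894b41 order.
    for tli in sorted(timelines, reverse=True):
        for source in (archive, pg_wal):
            if (tli, segno) in source:
                return tli
    return None

def search_per_source(timelines, archive, pg_wal, segno):
    # Check every timeline in the archive first, then every timeline in
    # pg_wal: the order the proposed simplification would have restored.
    for source in (archive, pg_wal):
        for tli in sorted(timelines, reverse=True):
            if (tli, segno) in source:
                return tli
    return None

# Segment 5 on timeline 1 has been archived, but timeline 2's copy of the
# same segment is still only in pg_wal (not archived yet).
archive = {(1, 5)}
pg_wal = {(2, 5)}
print(search_per_timeline([1, 2], archive, pg_wal, 5))  # 2, higher TLI wins
print(search_per_source([1, 2], archive, pg_wal, 5))    # 1
```

With the per-source order, recovery would pick timeline 1's archived segment and replay past the timeline switch point, which is the scenario commit 4d894b41 fixed.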
[
{
"msg_contents": "Hi,\n\nAs a card-carrying Unix hacker, I think it'd be great to remove the\ndifferences between platforms if possible using newer Windows\nfacilities, so everything just works the way we expect. Two things\nthat stopped progress on that front in the past were (1) uncertainty\nabout OS versioning, fixed in v16 with an EOL purge, and (2)\nuncertainty about what the new interfaces really do, due to lack of\ngood ways to test, which I'd like to address here.\n\nMy goals in this thread:\n\n * introduce a pattern/idiom for TAP-testing of low level C code\nwithout a database server\n * demonstrate the behaviour of our filesystem porting code with full coverage\n * fix reported bugs in my recent symlink changes with coverage\n * understand the new \"POSIX semantics\" changes in Windows 10\n * figure out what our policy should be on \"POSIX semantics\"\n\nFor context, we have a bunch of stuff under src/port to provide\nPOSIX-like implementations of:\n\n open()*\n fstat(), stat()*, lstat()*\n link(), unlink()*, rename()*\n symlink(), readlink()\n opendir(), readdir(), closedir()\n pg_pwrite(), pg_pread()\n\nThese call equivalent Windows system interfaces so we can mostly just\nwrite code that assumes the whole world is a POSIX. Some of them also\ndeal with three special aspects of Windows file handles, which\noccasionally cause trouble:\n\n1. errno == EACCES && GetLastError() == ERROR_SHARING_VIOLATION:\nThis happens if you try to access a file that has been opened\nwithout FILE_SHARE_ flags to allow concurrent access. While our own\nopen() wrapper uses those flags, other programs might not. The\nwrapper functions marked with an asterisk above deal with this\ncondition by sleeping or retrying for 10 or 30 seconds, in the hope\nthat the external program goes away. AFAIK this problem will always\nbe with us.\n\n2. errno == EACCES && GetLastNtStatus() == STATUS_DELETE_PENDING:\nThis happens if you try to access a directory entry that is scheduled\nfor asynchronous unlink, but is still present until all handles to the\nfile are closed. The wrapper functions above deal with this in\nvarious different ways:\n\n open() without O_CREAT: -> ENOENT, so we can pretend that unlink()\ncalls are synchronous\n open() with O_CREAT: -> EEXIST, the zombie dirent wins\n stat(), lstat(): -> ENOENT\n unlink(), rename(): retry, same as we do for ERROR_SHARING_VIOLATION,\nuntil timeout or asynchronous unlink completes (this may have been\nunintentional due to same errno?)\n\n3. errno == EACCES && <not sure>: You can't MoveFileEx() on top of\na file that someone has open.\n\nIn Windows 10, a new \"POSIX semantics\" mode was added. Yippee!\nVictor Spirin proposed[1] that we use it several commitfests ago.\nInterestingly, on some systems it is already partially activated\nwithout any change on our part. That is, on some systems, unlink()\nworks synchronously (when the call returns, the dirent is gone, even\nif someone else has the file open, just like Unix). Sounds great, but\nin testing different Windows systems I have access to using the\nattached test suite I found three different sets of behaviour:\n\n A) Using Windows unlink() and MoveFileEx() on Server 2019 (CI) I get\ntraditional STATUS_DELETE_PENDING problems\n B) Using Windows unlink()/MoveFileEx() on Windows 10 Home (local VM)\nI get mostly POSIX behaviour, except problem (3) above, which you can\nsee in my test suite\n C) Using Windows new SetFileInformationByHandle() calls with explicit\nrequest for POSIX semantics (this syscall is something like fcntl(), a\nkitchen sink kernel interface, and is what unlink() and MoveFileEx()\nand other things are built on, but if you do it yourself you can\nreach more flags) I get full POSIX behaviour according to my test\nsuite, i.e. 
agreement with FreeBSD and Linux for the dirent-related\ncases I've though about so far, on both of those Windows systems\n\nIt sounds like we want option C, as Victor proposed, but I'm not sure\nwhat happens if you try to use it on a non-NTFS filesystem (does it\nquietly fall back to non-POSIX semantics, or fail, or do all file\nsystems now support this?). I'm also not sure if we really support\nrunning on a non-NTFS filesystem, not being a Windows user myself.\n\nSo the questions I have are:\n\n * any thoughts on this C TAP testing system?\n * has anyone got a non-EOL'd OS version where this test suite fails?\n * has anyone got a relevant filesystem where this fails? which way\ndo ReFS and SMB go? do the new calls in 0010 just fail, and if so\nwith which code (ie could we add our own fallback path)?\n * which filesystems do we even claim to support?\n * if we switched to explicitly using POSIX-semantics like in the 0010\npatch, I assume there would be nothing left in the build farm or CI\nthat tests the non-POSIX code paths (either in these tests or in the\nreal server code), and the non-POSIX support would decay into\nnon-working form pretty quickly\n * if there are any filesystems that don't support POSIX-semantics,\nwould we want to either (1) get such a thing into the build farm so\nit's tested or (2) de-support non-POSIX-semantics filesystems by\nedict, and drop a lot of code and problems that everyone hates?\n\nThanks to Andres for the 0002 meson support. I have not yet written\nautoconf support; I guess I'd have to do that.\n\nYou can run this with eg \"meson test --suite=port -v\" on any OS. The\nfirst test result tells you whether it detected POSIX semantics or\nnot, which affects later testing. 
Unix systems are always detected as\nPOSIX, Windows 10+ systems are always POSIX if you apply the final\npatch, but could be either depending on your Windows version if you\ndon't, except they still have the quirk about problem (3) above for\nsome reason, which is why the relevant test changes in the final\npatch.\n\n(Note: The 0010 patch fails on the CI CompilerCheck cross build, which\nI think has to do with wanting _WIN32_WINNT >= 0xA02 to see some\ndefinitions, not looked into yet, and I haven't thought much about\nCygwin, but I expect they turn on all the POSIX things under the\ncovers too.)\n\n[1] https://commitfest.postgresql.org/40/3347/",
"msg_date": "Tue, 18 Oct 2022 22:00:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 10:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> * has anyone got a relevant filesystem where this fails? which way\n> do ReFS and SMB go? do the new calls in 0010 just fail, and if so\n> with which code (ie could we add our own fallback path)?\n\nAndres kindly ran these tests on some Win 10 and Win 11 VMs he had\nwith non-NTFS filesystems, so I can report:\n\nNTFS: have_posix_unlink_semantics == true, tests passing\n\nReFS: have_posix_unlink_semantics == false, tests passing\n\nSMB: have_posix_unlink_semantics == false, symlink related tests\nfailing (our junction points are rejected) + one readdir() test\nfailing (semantic difference introduced by SMB, it can't see\nSTATUS_DELETE_PENDING zombies).\n\nI think this means that PostgreSQL probably mostly works on SMB today,\nexcept you can't create tablespaces, and therefore our regression\ntests etc already can't pass there, and there may be a few extra\nENOTEMPTY race conditions due to readdir()'s different behaviour.\n\n> * if there are any filesystems that don't support POSIX-semantics,\n> would we want to either (1) get such a thing into the build farm so\n> it's tested or (2) de-support non-POSIX-semantics filesystems by\n> edict, and drop a lot of code and problems that everyone hates?\n\nYes, yes there are, so this question comes up. Put another way:\n\nI guess that almost all users of PostgreSQL on Windows are using NTFS.\nSome are getting partial POSIX semantics already, and some are not,\ndepending on the Windows variant. If we commit the 0010 patch, all\nsupported OSes will get full POSIX unlink semantics on NTFS. That'd\nleave just ReFS and SMB users (are there any other relevant\nfilesystems?) in the cold with non-POSIX semantics. Do we want to\nclaim that we support those filesystems? If so, I guess we'd need an\nanimal and perhaps also optional CI with ReFS. (Though ReFS may\neventually get POSIX semantics too, I have no idea about that.) 
If\nnot, we could in theory rip out various code we have to cope with the\nnon-POSIX unlink semantics, and completely forget about that whole\ncategory of problem.\n\nChanges in this version:\n* try to avoid tests that do bad things that crash if earlier tests\nfailed (I learned that close(-1) aborts in debug builds)\n* add fallback paths in 0010 (I learned what errors are raised on lack\nof POSIX support)\n* fix MinGW build problems\n\nAs far as I could tell, MinGW doesn't have a struct definition we\nneed, and it seems to want _WIN32_WINNT >= 0x0A000002 to see\nFileRenameInfoEx, which looks weird to me... (I'm not sure about that,\nbut I think that was perhaps supposed to be 0x0A02, but even that\nisn't necessary with MSVC SDK headers). I gave up researching that\nand put the definitions I needed into the code.",
"msg_date": "Thu, 20 Oct 2022 17:54:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "I pushed the bug fixes from this series, without their accompanying\ntests. Here's a rebase of the test suite, with all those tests now\nsquashed into the main test patch, and also the\ntell-Windows-to-be-more-like-Unix patch. Registered in the\ncommitfest.",
"msg_date": "Tue, 25 Oct 2022 17:11:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "Tue, 25 Oct 2022 13:12 Thomas Munro <thomas.munro@gmail.com>:\n>\n> I pushed the bug fixes from this series, without their accompanying\n> tests. Here's a rebase of the test suite, with all those tests now\n> squashed into the main test patch, and also the\n> tell-Windows-to-be-more-like-Unix patch. Registered in the\n> commitfest.\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:55:43 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "Tue, 25 Oct 2022 13:12 Thomas Munro <thomas.munro@gmail.com>:\n>\n> I pushed the bug fixes from this series, without their accompanying\n> tests. Here's a rebase of the test suite, with all those tests now\n> squashed into the main test patch, and also the\n> tell-Windows-to-be-more-like-Unix patch. Registered in the\n> commitfest.\n\nFor reference: https://commitfest.postgresql.org/40/3951/\n\nFor my understanding, does this entry supersede the proposal in\nhttps://commitfest.postgresql.org/40/3347/ ?\n\n(Apologies for the previous mail with no additional content).\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:58:03 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 8:58 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> For my understanding, does this entry supersede the proposal in\n> https://commitfest.postgresql.org/40/3347/ ?\n\nI think so (Victor hasn't commented). Patch 0004 derives from\nVictor's patch and has him as primary author still, but I made some\nchanges:\n\n* remove obsolete version check code\n* provide fallback code for systems where it doesn't work (after some\nresearch to determine that there are such systems, and what they do)\n* test that it's really more POSIX-like and demonstrate what that\nmeans (building on 0003)\n\nPatch 0003 is a set of file system semantics tests that work on Unix,\nbut also exercise those src/port/*.c wrappers on Windows and show\ndifferences from Unix semantics. Some of these tests also verify\nvarious bugfixes already committed, so they've been pretty useful to\nme already even though they aren't in the tree yet.\n\nPatches 0001 and 0002 are generic, unrelated to this Windows stuff,\nand provide a simple way to write unit tests for small bits of C code\nwithout a whole PostgreSQL server. That's something that has been\nproposed in the abstract many times before by many people. Here I've\ntried to be minimalist about it, just what I needed for the\nhigher-numbered patches, building on existing technologies (TAP).\n\n\n",
"msg_date": "Mon, 28 Nov 2022 21:53:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "On Tue, 25 Oct 2022 at 09:42, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> I pushed the bug fixes from this series, without their accompanying\n> tests. Here's a rebase of the test suite, with all those tests now\n> squashed into the main test patch, and also the\n> tell-Windows-to-be-more-like-Unix patch. Registered in the\n> commitfest.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n5212d447fa53518458cbe609092b347803a667c5 ===\n=== applying patch\n./v3-0002-meson-Add-infrastructure-for-TAP-tests-written-in.patch\npatching file meson.build\nHunk #5 FAILED at 3000.\nHunk #6 FAILED at 3035.\n2 out of 6 hunks FAILED -- saving rejects to file meson.build.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3951.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 17:36:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 1:06 AM vignesh C <vignesh21@gmail.com> wrote:\n> On Tue, 25 Oct 2022 at 09:42, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I pushed the bug fixes from this series, without their accompanying\n> > tests. Here's a rebase of the test suite, with all those tests now\n> > squashed into the main test patch, and also the\n> > tell-Windows-to-be-more-like-Unix patch. Registered in the\n> > commitfest.\n>\n> The patch does not apply ...\n\nI think this exercise was (if I say so myself) quite useful, to\nunderstand the Windows file system landscape. Maybe the things we\nfigured out by testing are common knowledge to real Windows\nprogrammers, I dunno, but they were certainly all news to me and not\ndocumented anywhere I could find, and the knowledge and tests will\nprobably help in future battles against Windows. The most important\nthings discovered were:\n\n 1. If you're testing on a Windows VM or laptop running 10 or 11 *you\naren't seeing the same behaviour as Windows Server*. So the semantics\ndon't match real production PostgreSQL deployments.\n\n 2. If we decided to turn on the new POSIX unlink semantics\nexplicitly as originally proposed by Victor, we'd get the behaviour we\nreally want on NTFS on all known Windows versions. But that would\nmove the traditional behaviour into a blind spot that we have no\ntesting for: ReFS and SMB. Our tree would probably gain more stuff\nthat doesn't work on them, so that would be tantamount to dropping\nsupport.\n\nTherefore, with regret, I'm going to withdraw this for now. We'd need\nto get CI testing for ReFS and/or SMB first, which could be arranged,\nbut even then, what is the point of POSIX semantics if you don't have\nthem everywhere? You can't even remove any code! 
Unless we could\nreach consensus that \"PostgreSQL is not supported on SMB or ReFS until\nthey gain POSIX semantics\" [which may never happen for all we know],\nand then commit this patch and forget about non-POSIX unlink semantics\nforever. I don't see us doing that in a hurry. So there's not much\nhope for this idea in this commitfest.\n\nThe little C TAP framework could definitely be useful as a starting\npoint for something else, and the FS semantics test will definitely\ncome in handy if this topic is reopened by some of those potential\nactions or needed to debug existing behaviour, and then I might even\nre-propose parts of it, but it's all here in the archives anyway.\n\n\n",
"msg_date": "Fri, 3 Mar 2023 14:52:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Understanding, testing and improving our Windows filesystem code"
}
]
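The "POSIX unlink semantics" at the heart of the thread above can be demonstrated with a short, self-contained sketch. Python is used here purely for illustration (the suite proposed in the thread is C TAP code): on a POSIX filesystem the directory entry disappears at unlink() time even while a handle is still open, whereas traditional Windows semantics keep a STATUS_DELETE_PENDING zombie dirent, blocking rmdir() with ENOTEMPTY, until the last handle closes.

```python
# Minimal illustration of POSIX unlink semantics (illustrative only, not
# part of the C TAP suite discussed above): unlinking an open file removes
# its directory entry immediately, while the open handle stays usable.
import os
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "segment")
f = open(path, "w")
f.write("data")

os.unlink(path)
dirent_gone = not os.path.exists(path)  # True with POSIX semantics

f.write(" still writable")              # the handle survives the unlink
f.close()
os.rmdir(d)                             # directory is already empty

print(dirent_gone)
```

On Unix (and on NTFS with POSIX semantics enabled via FILE_DISPOSITION_POSIX_SEMANTICS) this prints True; under traditional Windows semantics the dirent check and the rmdir() would both fail until the handle is closed.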
[
{
"msg_contents": "Hi,\n\nA couple days ago I posted a WIP patch [1] implementing \"BRIN Sort\",\ni.e. a node producing sorted output for BRIN minmax indexes. One of the\nchallenges I mentioned in that thread is costing - that's actually not\nspecific to that patch, it's an issue affecting BRIN in general, not\njust the proposed node, due to block-range indexes being very different\nfrom regular indexes with explicit tuple pointers.\n\nI mentioned I have some ideas how to improve this, and that I'll start a\nseparate thread to discuss this. So here we go ...\n\n\nThe traditional estimation problem is roughly this:\n\n Given a condition, how many rows will match it?\n\nThat is, given a table with X rows, we need to estimate how many rows\nwill match a WHERE condition, for example. And once we have the row\nestimate, we can estimate the amount of I/O, cost for sorting, etc.\n\nWe have built fairly solid capability to calculate these estimates,\nusing per-column statistics, extended statistics, ... The calculated\nestimates are not always perfect, but in general it works well.\n\nThis affects all path types etc. mostly equally - yes, some paths are\nmore sensitive to poor estimates (e.g. runtime may grow exponentially\nwith increasing rowcount).\n\n\nBRIN indexes however add another layer to this - once we have estimated\nthe number of rows, we need to estimate the number of page ranges this\nmaps to. You may estimate the WHERE condition to match 1000 rows, but\nthen you need to decide if that's 1 page range, 1000 page ranges or\npossibly even all page ranges for the table.\n\nIt all depends on how \"correlated\" the data is with physical position in\nthe table. If you have perfectly correlated data, it may be enough to\nscan a single page. If it's random, you may need to scan everything.\n\nThe existing costing uses the column correlation statistics, but sadly\nthat's rather insensitive to outlier values. 
If you have a sequential\ntable, and then set 1% of data to min/max (making the ranges very wide),\nthe correlation will remain very close to 1.0, but you'll have to scan\nall the ranges (and the costing won't reflect that).\n\nThe \"BRIN sort\" patch needs to estimate a different thing - given a page\nrange, how many other page ranges overlap with it? This is roughly the\namount of stuff we'll need to scan and sort in order to produce the\nfirst row.\n\nThese are all things we can't currently estimate - we have some rough\nheuristics, but it's pretty easy to confuse those.\n\nTherefore, I propose to calculate a couple new statistics for BRIN\nindexes (assume minmax indexes, unless mentioned otherwise):\n\n\n1) average number of overlapping ranges\n---------------------------------------\n\nGiven a range, with how many ranges it overlaps? In a perfectly\nsequential table this will be 0, so if you have a value you know it'll\nmatch just one range. In random table, it'll be pretty close to the\nnumber of page ranges.\n\nThis can be calculated by simply walking the ranges, sorted by minval\n(see brin_minmax_count_overlaps).\n\n\n2) average number of matching ranges for a value\n------------------------------------------------\n\nGiven a value, how many ranges it matches? This can be calculated by\nmatching sampled rows to ranges (brin_minmax_match_tuples_to_ranges).\n\nFor minmax indexes this is somewhat complementary to the average number\nof overlaps, the relationship is roughly this:\n\n avg(# of matching ranges) = 1 + avg(number of overlapping ranges)/2\n\nThe intuition is that if you assume a range randomly overlapped by other\nranges, you're likely to hit about 1/2 of them.\n\nThe reason why we want to calculate both (1) and (2) is that for other\nopclasses the relationship is not that simple. For bloom opclasses we\nprobably can't calculate overlaps at all (or at least not that easily),\nso the average number of matches is all we have. 
For minmax-multi, the\noverlaps will probably use only the min/max values, ignoring the \"gaps\",\nbut the matches should use the gaps.\n\n\n3) a bunch of other simple statistics\n-------------------------------------\n\nThese are number of summarized / not-summarized ranges, all_nulls and\nhas_nulls ranges, which is useful to estimate IS NULL conditions etc.\n\n\nThe attached patch implements a PoC of this. There's a new GUC\n(enable_indexam_stats) that can be used to enable/disable this (both the\nANALYZE and costing part). By default it's \"off\" so make sure to do\n\n SET enable_indexam_stats = true;\n\nThe statistics is stored in pg_statistics catalog, in a new staindexam\ncolumn (with bytea). The opclasses can implement a new support\nprocedure, similarly to what we do for opclass options. There's a couple\nof wrinkles (should be explained in XXX comments), but in principle this\nworks.\n\nThe brin_minmax_stats procedure implements this for minmax opclasses,\ncalculating the stuff mentioned above. I've been experimenting with\ndifferent ways to calculate some of the stuff, and ANALYZE prints info\nabout the calculated values and timings (this can be disabled by\nremoving the STATS_CROSS_CHECK define).\n\nFinally, brincostestimate() loads the statistics and uses it for\ncosting. At the moment it uses only the average number of overlaps.\n\nTrivial example:\n\ncreate table t (a int) with (fillfactor = 10);\n\n insert into t\n select (case when mod(i,22) = 0 then 100000000\n when mod(i,22) = 1 then 0\n else i end)\n from generate_series(1,300000) s(i);\n\n create index on t using brin (a) with (pages_per_range = 1);\n\nThe table fits 22 rows per page, and the data is mostly sequential,\nexcept that every page has both 0 and 100000000. 
The correlation however\nremains fairly high:\n\n # select correlation from pg_stats where tablename = 't';\n correlation\n -------------\n 0.8303595\n (1 row)\n\nNow, let's do a simple query:\n\n# explain (analyze, buffers, timing off) select * from t where a = 500;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=154.00..254.92 rows=2 width=4)\n (actual rows=1 loops=1)\n Recheck Cond: (a = 500)\n Rows Removed by Index Recheck: 299999\n Heap Blocks: lossy=13637\n Buffers: shared hit=13695\n -> Bitmap Index Scan on t_a_idx (cost=0.00..154.00 rows=26 width=0)\n (actual rows=136370 loops=1)\n Index Cond: (a = 500)\n Buffers: shared hit=58\n Planning:\n Buffers: shared hit=1\n Planning Time: 0.173 ms\n Execution Time: 101.972 ms\n(12 rows)\n\nThat's pretty poor, because brincostestimate() still thinks it'll be\nenough to read one or two page ranges (because 1/0.8 = ~1.2).\n\nNow, with the extra statistics:\n\nSET enable_indexam_stats = true;\nANALYZE t;\n\n QUERY PLAN\n----------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=157.41..17544.41 rows=2 width=4)\n (actual rows=1 loops=1)\n Recheck Cond: (a = 500)\n Rows Removed by Index Recheck: 299999\n Heap Blocks: lossy=13637\n Buffers: shared hit=13695\n -> Bitmap Index Scan on t_a_idx (cost=0.00..157.41 rows=300000\n width=0) (actual rows=136370 loops=1)\n Index Cond: (a = 500)\n Buffers: shared hit=58\n Planning:\n Buffers: shared hit=1\n Planning Time: 0.230 ms\n Execution Time: 104.603 ms\n(12 rows)\n\nSo in this case we realize we actually have to scan the whole table, all\n~13637 ranges, and the cost reflects that.\n\nFeel free to experiment with other data sets.\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/e70fa091-e338-1598-9de4-6d0ef6b693e2%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 18 Oct 2022 13:33:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "PATCH: AM-specific statistics, with an example implementation for\n BRIN (WIP)"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-18 13:33:59 +0200, Tomas Vondra wrote:\n> I mentioned I have some ideas how to improve this, and that I'll start a\n> separate thread to discuss this. So here we go ...\n\nThis CF entry has been failing tests since it was submitted. Are you planning\nto work on this further? If not I think we should just close the CF entry.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:34:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: AM-specific statistics, with an example implementation\n for BRIN (WIP)"
}
]
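The two statistics proposed in the thread above can be modelled with a brute-force toy. This is purely an illustrative sketch: the patch derives the numbers from ranges sorted by minval and from sampled rows (brin_minmax_count_overlaps, brin_minmax_match_tuples_to_ranges), not from the quadratic loops used here.

```python
# Toy model of the two per-index statistics proposed in the thread,
# computed by brute force over (minval, maxval) page ranges.

def avg_overlaps(ranges):
    # Average number of *other* ranges each range overlaps with.
    n = len(ranges)
    total = 0
    for i, (lo1, hi1) in enumerate(ranges):
        for j, (lo2, hi2) in enumerate(ranges):
            if i != j and lo1 <= hi2 and lo2 <= hi1:
                total += 1
    return total / n

def avg_matches(ranges, values):
    # Average number of ranges a lookup value falls into.
    total = sum(1 for v in values for lo, hi in ranges if lo <= v <= hi)
    return total / len(values)

# Perfectly sequential table: disjoint ranges, one match per value,
# i.e. the "scan a single page range" best case.
sequential = [(0, 9), (10, 19), (20, 29), (30, 39)]
print(avg_overlaps(sequential))            # 0.0
print(avg_matches(sequential, range(40)))  # 1.0

# Degenerate case: every range covers everything (e.g. an outlier in every
# page range), so a lookup must visit all of them; this is exactly the
# situation the plain correlation statistic fails to detect.
wide = [(0, 39)] * 4
print(avg_overlaps(wide))                  # 3.0
print(avg_matches(wide, range(40)))        # 4.0
```

For the sequential case the thread's rule of thumb, avg(matches) = 1 + avg(overlaps)/2, holds exactly (1.0 = 1 + 0/2); for the degenerate identical ranges it does not, and the thread notes the relationship is only approximate and opclass-dependent, which is why both statistics are collected separately.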
[
{
"msg_contents": "PSA trivial patch to fix a code comment typo seen during a recent review.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 19 Oct 2022 10:09:12 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in code comment"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 10:09:12AM +1100, Peter Smith wrote:\n> PSA trivial patch to fix a code comment typo seen during a recent review.\n\nPassing by.. And fixed. Thanks!\n--\nMichael",
"msg_date": "Wed, 19 Oct 2022 10:29:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in code comment"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 12:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 19, 2022 at 10:09:12AM +1100, Peter Smith wrote:\n> > PSA trivial patch to fix a code comment typo seen during a recent review.\n>\n> Passing by.. And fixed. Thanks!\n> --\n\nThanks for passing by.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 19 Oct 2022 12:46:42 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in code comment"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nThe current code comment says that the replication stream on a slot with\nthe given targetLSN can't continue after a restart but even without a\nrestart the stream cannot continue. The slot is invalidated and the\nwalsender process is terminated by the checkpoint process. Attaching a\nsmall patch to fix the comment.\n\n2022-10-19 06:26:22.387 UTC [144482] STATEMENT: START_REPLICATION SLOT\n\"s2\" LOGICAL 0/0\n2022-10-19 06:27:41.998 UTC [2553755] LOG: checkpoint starting: time\n2022-10-19 06:28:04.974 UTC [2553755] LOG: terminating process 144482 to\nrelease replication slot \"s2\"\n2022-10-19 06:28:04.974 UTC [144482] FATAL: terminating connection due to\nadministrator command\n2022-10-19 06:28:04.974 UTC [144482] CONTEXT: slot \"s2\", output plugin\n\"test_decoding\", in the change callback, associated LSN 0/1E23AB68\n2022-10-19 06:28:04.974 UTC [144482] STATEMENT: START_REPLICATION SLOT\n\"s2\" LOGICAL 0/0\n\nThanks,\nSirisha",
"msg_date": "Wed, 19 Oct 2022 00:09:09 -0700",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n return value"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 12:39 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> The current code comment says that the replication stream on a slot with the given targetLSN can't continue after a restart but even without a restart the stream cannot continue. The slot is invalidated and the walsender process is terminated by the checkpoint process. Attaching a small patch to fix the comment.\n>\n> 2022-10-19 06:26:22.387 UTC [144482] STATEMENT: START_REPLICATION SLOT \"s2\" LOGICAL 0/0\n> 2022-10-19 06:27:41.998 UTC [2553755] LOG: checkpoint starting: time\n> 2022-10-19 06:28:04.974 UTC [2553755] LOG: terminating process 144482 to release replication slot \"s2\"\n> 2022-10-19 06:28:04.974 UTC [144482] FATAL: terminating connection due to administrator command\n> 2022-10-19 06:28:04.974 UTC [144482] CONTEXT: slot \"s2\", output plugin \"test_decoding\", in the change callback, associated LSN 0/1E23AB68\n> 2022-10-19 06:28:04.974 UTC [144482] STATEMENT: START_REPLICATION SLOT \"s2\" LOGICAL 0/0\n\nI think the walsender/replication stream can still continue even\nbefore the checkpointer signals it to terminate, there's an\nilluminating comment (see [1]) specifying when it can happen. It means\nthat the GetWALAvailability() can return WALAVAIL_REMOVED but the\ncheckpointer hasn't yet signalled/in the process of signalling the\nwalsender to terminate.\n\n * * WALAVAIL_REMOVED means it has been removed. A replication stream on\n * a slot with this LSN cannot continue after a restart.\n\nThe above existing comment, says that the slot isn't usable if\n\"someone\" (either checkpointer or walsender or entire server itself)\ngot restarted. It looks fine, no?\n\n[1]\n case WALAVAIL_REMOVED:\n\n /*\n * If we read the restart_lsn long enough ago, maybe that file\n * has been removed by now. However, the walsender could have\n * moved forward enough that it jumped to another file after\n * we looked. If checkpointer signalled the process to\n * termination, then it's definitely lost; but if a process is\n * still alive, then \"unreserved\" seems more appropriate.\n *\n * If we do change it, save the state for safe_wal_size below.\n */\n if (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n {\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Oct 2022 13:06:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix GetWALAvailability function code comments for\n WALAVAIL_REMOVED return value"
},
{
"msg_contents": "At Wed, 19 Oct 2022 13:06:08 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Oct 19, 2022 at 12:39 PM sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> >\n> > The current code comment says that the replication stream on a slot with the given targetLSN can't continue after a restart but even without a restart the stream cannot continue. The slot is invalidated and the walsender process is terminated by the checkpoint process. Attaching a small patch to fix the comment.\n\nThe description was correct at the early stage of the development of\nmax_slot_wal_keep_size_mb. At that time the affected processes\n(walsenders) are not actively killed. On the other hand, the current\nimplementation actively kills the affected walsenders before removing\nsegments. Thus considering the whole current mechanism of this\nfeature, WALAVAIL_REMOVED represents that \"the segment has been\nremoved, along with the affected processes having been already\nkilled, too\".\n\nIn short, the proposed fix alone seems fine to me. If we want to show\nfurther details, I would add a bit as follows.\n\n| * * WALAVAIL_REMOVED means it has been removed. A replication stream on\n| * a slot with this LSN cannot continue. Note that the affected\n| * processes have been terminated by checkpointer, too.\n\n\n> I think the walsender/replication stream can still continue even\n> before the checkpointer signals it to terminate, there's an\n> illuminating comment (see [1]) specifying when it can happen. It means\n\nIt is a description about the possible advancement of restart_lsn\nafter the function reads it. So the point is a bit off the said\nproposal.\n \n> that the GetWALAvailability() can return WALAVAIL_REMOVED but the\n> checkpointer hasn't yet signalled/in the process of signalling the\n> walsender to terminate.\n> \n> * * WALAVAIL_REMOVED means it has been removed. A replication stream on\n> * a slot with this LSN cannot continue after a restart.\n> \n> The above existing comment, says that the slot isn't usable if\n> \"someone\" (either checkpointer or walsender or entire server itself)\n> got restarted. It looks fine, no?\n>\n> [1]\n> case WALAVAIL_REMOVED:\n> \n> /*\n> * If we read the restart_lsn long enough ago, maybe that file\n> * has been removed by now. However, the walsender could have\n> * moved forward enough that it jumped to another file after\n> * we looked. If checkpointer signalled the process to\n> * termination, then it's definitely lost; but if a process is\n> * still alive, then \"unreserved\" seems more appropriate.\n> *\n> * If we do change it, save the state for safe_wal_size below.\n> */\n> if (!XLogRecPtrIsInvalid(slot_contents.data.restart_lsn))\n> {\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Oct 2022 11:59:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix GetWALAvailability function code comments for\n WALAVAIL_REMOVED return value"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 7:59 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Wed, 19 Oct 2022 13:06:08 +0530, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote in\n> > On Wed, Oct 19, 2022 at 12:39 PM sirisha chamarthi\n> > <sirichamarthi22@gmail.com> wrote:\n> > >\n> > > The current code comment says that the replication stream on a slot\n> with the given targetLSN can't continue after a restart but even without a\n> restart the stream cannot continue. The slot is invalidated and the\n> walsender process is terminated by the checkpoint process. Attaching a\n> small patch to fix the comment.\n>\n> In short, the proposed fix alone seems fine to me. If we want to show\n> further details, I would add a bit as follows.\n>\n> | * * WALAVAIL_REMOVED means it has been removed. A replication stream on\n> | * a slot with this LSN cannot continue. Note that the affected\n> | * processes have been terminated by checkpointer, too.\n>\n\nThanks for your comments! Attached the patch with your suggestions.\n\nThanks,\nSirisha",
"msg_date": "Thu, 20 Oct 2022 09:40:24 -0700",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix GetWALAvailability function code comments for\n WALAVAIL_REMOVED return value"
},
{
"msg_contents": "sirisha chamarthi <sirichamarthi22@gmail.com> writes:\n> On Wed, Oct 19, 2022 at 7:59 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n>> In short, the proposed fix alone seems fine to me. If we want to show\n>> further details, I would add a bit as follows.\n>> \n>> | * * WALAVAIL_REMOVED means it has been removed. A replication stream on\n>> | * a slot with this LSN cannot continue. Note that the affected\n>> | * processes have been terminated by checkpointer, too.\n\n> Thanks for your comments! Attached the patch with your suggestions.\n\nPushed with a bit of additional wordsmithing. I thought \"have been\"\nwas a bit too strong of an assertion considering that this function\ndoes not pay any attention to the actual state of any processes,\nso I made it say \"should have been\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Jan 2023 18:43:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix GetWALAvailability function code comments for\n WALAVAIL_REMOVED return value"
},
{
"msg_contents": "At Thu, 19 Jan 2023 18:43:52 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> sirisha chamarthi <sirichamarthi22@gmail.com> writes:\n> > On Wed, Oct 19, 2022 at 7:59 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> > wrote:\n> >> In short, the proposed fix alone seems fine to me. If we want to show\n> >> further details, I would add a bit as follows.\n> >> \n> >> | * * WALAVAIL_REMOVED means it has been removed. A replication stream on\n> >> | * a slot with this LSN cannot continue. Note that the affected\n> >> | * processes have been terminated by checkpointer, too.\n> \n> > Thanks for your comments! Attached the patch with your suggestions.\n> \n> Pushed with a bit of additional wordsmithing. I thought \"have been\"\n\nThanks!\n\n> was a bit too strong of an assertion considering that this function\n> does not pay any attention to the actual state of any processes,\n> so I made it say \"should have been\".\n\nI think you're correct here.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 09:47:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix GetWALAvailability function code comments for\n WALAVAIL_REMOVED return value"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI happened to notice $subject. Attach a trivial patch for that.\n\nThanks\nRichard",
"msg_date": "Wed, 19 Oct 2022 16:02:34 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some comments that should've covered MERGE"
},
{
"msg_contents": "On 2022-Oct-19, Richard Guo wrote:\n\n> Hi hackers,\n> \n> I happened to notice $subject. Attach a trivial patch for that.\n\nThanks, applied. I did change the comment atop setTargetTable, which I\nthought could use a little bit more detail on what is happening, and\nalso in its callsite in transformMergeStmt.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)\n\n\n",
"msg_date": "Mon, 24 Oct 2022 12:59:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Some comments that should've covered MERGE"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 6:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Oct-19, Richard Guo wrote:\n>\n> > Hi hackers,\n> >\n> > I happened to notice $subject. Attach a trivial patch for that.\n>\n> Thanks, applied. I did change the comment atop setTargetTable, which I\n> thought could use a little bit more detail on what is happening, and\n> also in its callsite in transformMergeStmt.\n\n\nThanks for the fix!\n\nThanks\nRichard",
"msg_date": "Tue, 25 Oct 2022 10:15:18 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some comments that should've covered MERGE"
}
] |
[
{
"msg_contents": "HI,\n\nI noticed that the tab completion for ALTER STATISTICS .. SET was not\nhandled. The attached patch displays SCHEMA and STATISTICS for tab\ncompletion of ALTER STATISTICS name SET.\n\nRegards,\nVignesh",
"msg_date": "Wed, 19 Oct 2022 16:06:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improve tab completion for ALTER STATISTICS"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 04:06:51PM +0530, vignesh C wrote:\n> I noticed that the tab completion for ALTER STATISTICS .. SET was not\n> handled. The attached patch displays SCHEMA and STATISTICS for tab\n> completion of ALTER STATISTICS name SET.\n\nIndeed, it is a bit strange as we would get a list of settable\nparameters once the completion up to SET is done, rather than\nSTATISTICS and SCHEMA. Your patch looks fine, so applied. Thanks!\n--\nMichael",
"msg_date": "Mon, 24 Oct 2022 16:00:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER STATISTICS"
},
{
"msg_contents": "On Mon, 24 Oct 2022 at 12:30, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 19, 2022 at 04:06:51PM +0530, vignesh C wrote:\n> > I noticed that the tab completion for ALTER STATISTICS .. SET was not\n> > handled. The attached patch displays SCHEMA and STATISTICS for tab\n> > completion of ALTER STATISTICS name SET.\n>\n> Indeed, it is a bit strange as we would get a list of settable\n> parameters once the completion up to SET is done, rather than\n> STATISTICS and SCHEMA. Your patch looks fine, so applied. Thanks!\n\nThanks for pushing this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 25 Oct 2022 08:48:11 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER STATISTICS"
}
] |
[
{
"msg_contents": "When standby is recovering to a timeline that doesn't have any segments\narchived yet it will just blindly blow past the timeline switch point and\nkeeps on recovering on the old timeline. Typically that will eventually\nresult in an error about incorrect prev-link, but under unhappy\ncircumstances can result in standby silently having different contents.\n\nAttached is a shell script that reproduces the issue. Goes back to at least\nv12, probably longer.\n\nI think we should be keeping track of where the current replay timeline is\ngoing to end and not read any records past it on the old timeline. Maybe\nwhile at it, we should also track that the next record should be a\ncheckpoint record for the timeline switch and error out if not. Thoughts?\n\n-- \n\nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com",
"msg_date": "Wed, 19 Oct 2022 18:50:09 +0300",
"msg_from": "Ants Aasma <ants@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Standby recovers records from wrong timeline"
},
{
"msg_contents": "At Wed, 19 Oct 2022 18:50:09 +0300, Ants Aasma <ants@cybertec.at> wrote in \n> When standby is recovering to a timeline that doesn't have any segments\n> archived yet it will just blindly blow past the timeline switch point and\n> keeps on recovering on the old timeline. Typically that will eventually\n> result in an error about incorrect prev-link, but under unhappy\n> circumstances can result in standby silently having different contents.\n> \n> Attached is a shell script that reproduces the issue. Goes back to at least\n> v12, probably longer.\n> \n> I think we should be keeping track of where the current replay timeline is\n> going to end and not read any records past it on the old timeline. Maybe\n> while at it, we should also track that the next record should be a\n> checkpoint record for the timeline switch and error out if not. Thoughts?\n\nprimary_restored did a time-travel to past a bit because of the\nrecovery_target=immediate. In other words, the primary_restored and\nthe replica diverge. I don't think it is legit to connect a diverged\nstandby to a primary.\n\nSo, about the behavior in doubt, it is the correct behavior to\nseemingly ignore the history file in the archive. Recovery assumes\nthat the first half of the first segment of the new timeline is the\nsame with the same segment of the old timeline (.partial) so it is\nlegit to read the <tli=1,seg=2> file til the end and that causes the\nreplica goes beyond the divergence point.\n\nAs you know, when new primary starts a diverged history, the\nrecommended way is to blow (or stash) away the archive, then take a\nnew backup from the running primary.\n\nIf you don't want to trash all the past backups, remove the archived\nfiles equals to or after the divergence point before starting the\nstandby. They're <tli=2,seg=2,3> in this case. Also you must remove\nreplica/pg_wal/<tli=2,seg=2> before starting the replica. That file\ncauses recovery run beyond the divergence point before fetching from\narchive or stream.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Oct 2022 17:29:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "Sorry, a correction needed..\n\nAt Thu, 20 Oct 2022 17:29:57 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 19 Oct 2022 18:50:09 +0300, Ants Aasma <ants@cybertec.at> wrote in \n> > When standby is recovering to a timeline that doesn't have any segments\n> > archived yet it will just blindly blow past the timeline switch point and\n> > keeps on recovering on the old timeline. Typically that will eventually\n> > result in an error about incorrect prev-link, but under unhappy\n> > circumstances can result in standby silently having different contents.\n> > \n> > Attached is a shell script that reproduces the issue. Goes back to at least\n> > v12, probably longer.\n> > \n> > I think we should be keeping track of where the current replay timeline is\n> > going to end and not read any records past it on the old timeline. Maybe\n> > while at it, we should also track that the next record should be a\n> > checkpoint record for the timeline switch and error out if not. Thoughts?\n> \n> primary_restored did a time-travel to past a bit because of the\n> recovery_target=immediate. In other words, the primary_restored and\n> the replica diverge. I don't think it is legit to connect a diverged\n> standby to a primary.\n> \n> So, about the behavior in doubt, it is the correct behavior to\n> seemingly ignore the history file in the archive. Recovery assumes\n> that the first half of the first segment of the new timeline is the\n> same with the same segment of the old timeline (.partial) so it is\n> legit to read the <tli=1,seg=2> file til the end and that causes the\n> replica goes beyond the divergence point.\n> \n> As you know, when new primary starts a diverged history, the\n> recommended way is to blow (or stash) away the archive, then take a\n> new backup from the running primary.\n> \n> If you don't want to trash all the past backups, remove the archived\n> files equals to or after the divergence point before starting the\n> standby. They're <tli=2,seg=2,3> in this case. Also you must remove\n\n<tli=2,seg=2,3> => <tli=1,seg=2,3>\n\n> replica/pg_wal/<tli=2,seg=2> before starting the replica. That file\n> causes recovery run beyond the divergence point before fetching from\n> archive or stream.\n\n<tli=2,seg=2> => <tli=1,seg=2> \n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Oct 2022 17:34:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "Forgot a caveat.\n\nAt Thu, 20 Oct 2022 17:34:13 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 19 Oct 2022 18:50:09 +0300, Ants Aasma <ants@cybertec.at> wrote in \n> > When standby is recovering to a timeline that doesn't have any segments\n> > archived yet it will just blindly blow past the timeline switch point and\n> > keeps on recovering on the old timeline. Typically that will eventually\n> > result in an error about incorrect prev-link, but under unhappy\n> > circumstances can result in standby silently having different contents.\n> > \n> > Attached is a shell script that reproduces the issue. Goes back to at least\n> > v12, probably longer.\n> > \n> > I think we should be keeping track of where the current replay timeline is\n> > going to end and not read any records past it on the old timeline. Maybe\n> > while at it, we should also track that the next record should be a\n> > checkpoint record for the timeline switch and error out if not. Thoughts?\n> \n> primary_restored did a time-travel to past a bit because of the\n> recovery_target=immediate. In other words, the primary_restored and\n> the replica diverge. I don't think it is legit to connect a diverged\n> standby to a primary.\n> \n> So, about the behavior in doubt, it is the correct behavior to\n> seemingly ignore the history file in the archive. Recovery assumes\n> that the first half of the first segment of the new timeline is the\n> same with the same segment of the old timeline (.partial) so it is\n> legit to read the <tli=1,seg=2> file til the end and that causes the\n> replica goes beyond the divergence point.\n> \n> As you know, when new primary starts a diverged history, the\n> recommended way is to blow (or stash) away the archive, then take a\n> new backup from the running primary.\n> \n> If you don't want to trash all the past backups, remove the archived\n> files equals to or after the divergence point before starting the\n> standby. They're <tli=1,seg=2,3> in this case. Also you must remove\n> replica/pg_wal/<tli=1,seg=2> before starting the replica. That file\n> causes recovery run beyond the divergence point before fetching from\n> archive or stream.\n\nThe reason this is workable is (as far as I can see) using\nrecovery_target=immediate to stop replication and the two clusters\nshare the completely identical disk image. Otherwise this steps\nresults in a broken standby.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Oct 2022 17:47:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "On Thu, 20 Oct 2022 at 11:30, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> primary_restored did a time-travel to past a bit because of the\n> recovery_target=immediate. In other words, the primary_restored and\n> the replica diverge. I don't think it is legit to connect a diverged\n> standby to a primary.\n\nprimary_restored did timetravel to the past, as we're doing PITR on the\nprimary that's the expected behavior. However replica is not diverged,\nit's a copy of the exact same basebackup. The usecase is restoring a\ncluster from backup using PITR and using the same backup to create a\nstandby. Currently this breaks when primary has not yet archived any\nsegments.\n\n> So, about the behavior in doubt, it is the correct behavior to\n> seemingly ignore the history file in the archive. Recovery assumes\n> that the first half of the first segment of the new timeline is the\n> same with the same segment of the old timeline (.partial) so it is\n> legit to read the <tli=1,seg=2> file til the end and that causes the\n> replica goes beyond the divergence point.\n\nWhat is happening is that primary_restored has a timeline switch at\ntli 2, lsn 0/2000100, and the next insert record starts in the same\nsegment. Replica is starting on the same backup on timeline 1, tries to\nfind tli 2 seg 2, which is not archived yet, so falls back to tli 1 seg 2\nand replays tli 1 seg 2 continuing to tli seg 3, then connects to primary\nand starts applying wal starting from tli 2 seg 4. To me that seems\ncompletely broken.\n\n> As you know, when new primary starts a diverged history, the\n> recommended way is to blow (or stash) away the archive, then take a\n> new backup from the running primary.\n\nMy understanding is that backup archives are supposed to remain valid\neven after PITR or equivalently a lagging standby promoting.\n\n--\nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 20 Oct 2022 14:44:40 +0300",
"msg_from": "Ants Aasma <ants@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "At Thu, 20 Oct 2022 14:44:40 +0300, Ants Aasma <ants@cybertec.at> wrote in \n> My understanding is that backup archives are supposed to remain valid\n> even after PITR or equivalently a lagging standby promoting.\n\nSorry, I was dim because of maybe catching a cold:p\n\nOn second thought. everything works fine if the first segment of the\nnew timeline is archived in this case. So the problem here is whether\nrecovery should wait for a known new timline when no segment on the\nnew timeline is available yet. As you say, I think it is sensible\nthat recovery waits at the divergence LSN for the first segment on the\nnew timeline before proceeding on the same timeline.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 21 Oct 2022 16:45:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "At Fri, 21 Oct 2022 16:45:59 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 20 Oct 2022 14:44:40 +0300, Ants Aasma <ants@cybertec.at> wrote in \n> > My understanding is that backup archives are supposed to remain valid\n> > even after PITR or equivalently a lagging standby promoting.\n> \n> Sorry, I was dim because of maybe catching a cold:p\n> \n> On second thought. everything works fine if the first segment of the\n> new timeline is archived in this case. So the problem here is whether\n> recovery should wait for a known new timline when no segment on the\n> new timeline is available yet. As you say, I think it is sensible\n> that recovery waits at the divergence LSN for the first segment on the\n> new timeline before proceeding on the same timeline.\n\nIt is simpler than anticipated. Just not descending timelines when\nlatest works. It dones't consider the case of explict target timlines\nso it's just a PoC. (So this doesn't work if recovery_target_timeline\nis set to 2 for the \"standby\" in the repro.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 21 Oct 2022 17:12:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "At Fri, 21 Oct 2022 17:12:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> latest works. It dones't consider the case of explict target timlines\n> so it's just a PoC. (So this doesn't work if recovery_target_timeline\n> is set to 2 for the \"standby\" in the repro.)\n\nSo, finally I noticed that the function XLogFileReadAnyTLI is not\nneeded at all if we are going this direction.\n\nRegardless of recvoery_target_timeline is latest or any explicit\nimeline id or checkpoint timeline, what we can do to reach the target\ntimline is just to follow the history file's direction.\n\nIf segments are partly gone while reading on a timeline, a segment on\nthe older timelines is just a crap since it should be incompatible.\n\nSo.. I'm at a loss about what the function is for.\n\nPlease anyone tell me why do we need the behavior of\nXLogFileReadAnyTLI() at all?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 21 Oct 2022 17:44:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "At Fri, 21 Oct 2022 17:44:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 21 Oct 2022 17:12:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > latest works. It dones't consider the case of explict target timlines\n> > so it's just a PoC. (So this doesn't work if recovery_target_timeline\n> > is set to 2 for the \"standby\" in the repro.)\n> \n> So, finally I noticed that the function XLogFileReadAnyTLI is not\n> needed at all if we are going this direction.\n> \n> Regardless of recvoery_target_timeline is latest or any explicit\n> imeline id or checkpoint timeline, what we can do to reach the target\n> timline is just to follow the history file's direction.\n> \n> If segments are partly gone while reading on a timeline, a segment on\n> the older timelines is just a crap since it should be incompatible.\n> \n> So.. I'm at a loss about what the function is for.\n> \n> Please anyone tell me why do we need the behavior of\n> XLogFileReadAnyTLI() at all?\n\nIt is introduced by 1bb2558046. And the behavior dates back to 2042b3428d.\n\nHmmm.. XLogFileRead() at the time did essentially the same thing to\nthe current XLogFileReadAnyTLI. At that time the expectedTL*I*s\ncontained only timeline IDs. Thus it seems to me, at that time,\nrecovery assumed that it is fine with reading the segment on the\ngreatest available timeline in the TLI list at every moment. (Mmm. I\ncannot describe this precise enough....) In other words it did not\nintend to use the segments on the older timelines than expected as the\nreplacement of the segment on the correct timeline.\n\nIf this is correct (I hope the description above makes sense), now\nthat we can determine the exact TLI to read for the specified segno,\nwe don't need to descend to older timelines. In other words, the\ncurrent XLogFileReadAnyTLI() should be just XLogFileReadOnHistory(),\nwhich reads a segment of the exact timeline calculated from the\nexpectedTLEs and the segno.\n\nI'm going to work in this direction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 21 Oct 2022 18:38:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Standby recovers records from wrong timeline"
},
{
"msg_contents": "On Fri, 21 Oct 2022 at 11:44, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 21 Oct 2022 17:12:45 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > latest works. It dones't consider the case of explict target timlines\n> > so it's just a PoC. (So this doesn't work if recovery_target_timeline\n> > is set to 2 for the \"standby\" in the repro.)\n>\n> So, finally I noticed that the function XLogFileReadAnyTLI is not\n> needed at all if we are going this direction.\n>\n> Regardless of recvoery_target_timeline is latest or any explicit\n> imeline id or checkpoint timeline, what we can do to reach the target\n> timline is just to follow the history file's direction.\n>\n> If segments are partly gone while reading on a timeline, a segment on\n> the older timelines is just a crap since it should be incompatible.\n\nI came to the same conclusion. I adjusted XLogFileReadAnyTLI to not use any\ntimeline that ends within the segment (attached patch). At this point the\nname of the function becomes really wrong, XLogFileReadCorrectTLI or\nsomething to that effect would be much more descriptive and the code could\nbe simplified.\n\nHowever I'm not particularly happy with this approach as it will not use\nvalid WAL if that is not available. Consider scenario of a cascading\nfailure. Node A has a hard failure, then node B promotes, archives history\nfile, but doesn't see enough traffic to archive a full segment before\nfailing itself. While this is happening we restore node A from backup and\nstart it up as a standby.\n\nIf node b fails before node A has a chance to connect then either we are\ncontinuing recovery on the wrong timeline (current behavior) or we will\nnot try to recover the first portion of the archived WAL file (with patch).\n\nSo I think the correct approach would still be to have ReadRecord() or\nApplyWalRecord() determine that switching timelines is needed.\n\n-- \nAnts Aasma\nwww.cybertec-postgresql.com",
"msg_date": "Fri, 21 Oct 2022 12:48:36 +0300",
"msg_from": "Ants Aasma <ants@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Standby recovers records from wrong timeline"
}
] |
[
{
"msg_contents": "Hi,\n\nshortly after shared memory stats went in I had a conversation with Lukas\nabout what it'd enable us going forward. I also chatted with Peter about\nautovacuum related stats. I started to write an email, but then somehow lost\nthe draft and couldn't bring myself to start from scratch.\n\n\nHere's a largely unordered list of ideas. I'm not planning to work on them\nmyself, but thought it'd nevertheless be useful to have them memorialized\nsomewhere.\n\n\n1) Track some statistics based on relfilenodes rather than oids\n\nWe currently track IO related statistics as part of the normal relation\nstats. The problem is that that prevents us from collecting stats whenever we\noperate on a relfilenode, rather than a Relation.\n\nWe e.g. currently can't track the number of blocks written out in a relation,\nbecause we don't have a Relation at that point. Nor can't we really get hold\nof one, as the writeback can happen in a different database without access to\npg_class. Which is also the reason why the per-relation IO stats aren't\npopulated by the startup process, even though it'd obviously sometimes be\nhelpful to know where the most IO time is spent on a standby.\n\nThere's also quite a bit of contortions of the bufmgr interface related to\nthis.\n\nI think the solution to this is actually fairly simple: We split the IO\nrelated statistics out from the relation statistics, and track them on a\nrelfilenode basis instead. That'd allow us to track all the IO stats from all\nthe places, rather than the partial job we do right now.\n\n\n2) Split index and table statistics into different types of stats\n\nWe track both types of statistics in the same format and rename column in\nviews etc to make them somewhat sensible. 
A number of the \"columns\" in index\nstats are currently unused.\n\nIf we split the stats for indexes and relations we can have reasonable names\nfor the fields, shrink the current memory usage by halfing the set of fields\nwe keep for indexes, and extend the stats in a more targeted fashion.\n\n\nThis e.g. would allow us keep track of the number of index entries killed via\nthe killtuples mechanism, which in turn would allow us to more intelligently\ndecide whether we should vacuum indexes (often the most expensive part of\nvacuum). In a lot of workload killtuples takes care of most of the cleanup,\nbut in others it doesn't do much.\n\n\n3) Maintain more historical statistics about vacuuming\n\nWe currently track the last time a table was vacuumed, the number of times it\nwas vacuumed and a bunch of counters for the number of modified tuples since\nthe last vacuum.\n\nHowever, none of that allows the user to identify which relations are causing\nautovacuum to not keep up. Even just keeping track of the the total time\nautovacuum has spent on certain relations would be a significant improvement,\nwith more easily imaginable (total IO [time], autovacuum delay time, xid age).\n\n\n4) Make the stats mechanism extensible\n\nMost of the work towards this has already been done, but a bit more work is\nnecessary. The hardest likely is how to identify stats belonging to an\nextension across restarts.\n\nThere's a bunch of extensions with their own stats mechanisms, but it's hard\nto get this stuff right from the outside.\n\n\n5) Use extensible shared memory stats to store pg_stat_statements data\n\npg_stat_statements current mechanism has a few issues. The top ones I know of\nare:\n\n- Contention on individual stats entries when the same queryid is executed\n concurrently. 
pgstats deals with this by allowing stats to be collected in\n backend local memory and to be flushed into shared stats at a lower\n frequency.\n\n- The querytext file can get huge (I've seen > 100GB) and cause massive\n slowdowns. It's better than the old fixed-length, fixed-shared-memory\n mechansism, don't get me wrong. But we can do better by storing the data in\n dynamic shared memory and then also support trimming based on the total\n size.\n\n\nThere were some other things, but I can't remember them right now.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Oct 2022 11:19:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "shared memory stats ideas"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 11:19 AM Andres Freund <andres@anarazel.de> wrote:\n> We e.g. currently can't track the number of blocks written out in a relation,\n> because we don't have a Relation at that point. Nor can't we really get hold\n> of one, as the writeback can happen in a different database without access to\n> pg_class. Which is also the reason why the per-relation IO stats aren't\n> populated by the startup process, even though it'd obviously sometimes be\n> helpful to know where the most IO time is spent on a standby.\n>\n> There's also quite a bit of contortions of the bufmgr interface related to\n> this.\n\nThis seems related to the difficulty with distinguishing between\ninternal pages and leaf pages (or some generalized AM-agnostic\ndefinition) in views like pg_statio_*_indexes.\n\nDifferentiating between leaf pages and internal pages would definitely\nbe a big improvement, but it's kind of an awkward thing to implement\n[1] because you have to somehow invent the general concept of multiple\ndistinct kinds of buffers/pages within a relation. A lot of code would\nneed to be taught about that.\n\nThis work would be more likely to actually happen if it was tied to\nsome bigger project that promised other benefits.\n\n> 2) Split index and table statistics into different types of stats\n\n> This e.g. would allow us keep track of the number of index entries killed via\n> the killtuples mechanism, which in turn would allow us to more intelligently\n> decide whether we should vacuum indexes (often the most expensive part of\n> vacuum). In a lot of workload killtuples takes care of most of the cleanup,\n> but in others it doesn't do much.\n\nWhile I do agree that it would be nice to record information about the\nnumber of deletion operations per index, that information will still\nbe tricky to interpret and act upon relative to other kinds of\ninformation. 
As a general rule, we should prefer to focus on signals\nthat show things really aren't going well in some specific and\nunambiguous way. Signals about things that are going well seem harder\nto work with -- they don't generalize well.\n\nWhat I really mean here is this: I think that page split stuff is\ngoing to be much more interesting than index deletion stuff. Index\ndeletion exists to prevent page splits. So it's natural to ask\nquestions about where that seems like it ought to have happened, but\ndidn't actually happen. This likely requires bucketing page splits\ninto different categories (since most individual page splits aren't\nlike that at all). Then it becomes much easier to (say) compare\nindexes on the same table -- the user can follow a procedure that is\nlikely to generalize well to many different kinds of situations.\n\nIt's not completely clear how the bucketization would work. We ought\nto remember how many page splits were caused by INSERT statements\nrather than non-HOT UPDATEs, though -- that much seems likely to be\nvery useful and actionable. The DBA can probably consume this\ninformation in a low context way by looking at the proportions of one\nkind of split to the other at the level of each index.\n\nOne type of split is mostly just a \"cost of doing business\" for B-Tree\nindexing. The other type really isn't.\n\n> 3) Maintain more historical statistics about vacuuming\n\n> However, none of that allows the user to identify which relations are causing\n> autovacuum to not keep up. 
Even just keeping track of the the total time\n> autovacuum has spent on certain relations would be a significant improvement,\n> with more easily imaginable (total IO [time], autovacuum delay time, xid age).\n\nWith VACUUM in particular the picture over time can be far easier to\nwork with than any given snapshot, from any single VACUUM operation.\nFocusing on how things seem to be changing can make it a lot easier to\nspot concerning trends, especially if you're a non-expert.\n\nI would also expect a similar focus on the picture over time to be\nuseful with the indexing stuff, for roughly the same underlying\nreasons.\n\n[1] https://postgr.es/m/CAA8Fd-pB=mr42YQuoaLPO_o2=XO9YJnjQ23CYJDFwC8SXGM8zg@mail.gmail.com\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Oct 2022 12:37:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: shared memory stats ideas"
},
{
"msg_contents": "Hi,\n\nOn 10/19/22 8:19 PM, Andres Freund wrote:\n> \n> Hi,\n> \n> \n> Here's a largely unordered list of ideas. I'm not planning to work on them\n> myself, but thought it'd nevertheless be useful to have them memorialized\n> somewhere.\n> \n\nThanks for sharing this list of ideas!\n\n> \n> \n> 2) Split index and table statistics into different types of stats\n> \n> We track both types of statistics in the same format and rename column in\n> views etc to make them somewhat sensible. A number of the \"columns\" in index\n> stats are currently unused.\n> \n> If we split the stats for indexes and relations we can have reasonable names\n> for the fields, shrink the current memory usage by halfing the set of fields\n> we keep for indexes, and extend the stats in a more targeted fashion.\n\nI started to work on this.\nI should be able to provide a patch attempt in the next couple of weeks.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Oct 2022 09:17:44 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared memory stats ideas"
},
{
"msg_contents": "Thanks for the nice list.\n\nAt Wed, 19 Oct 2022 12:37:30 -0700, Peter Geoghegan <pg@bowt.ie> wrote in \n> On Wed, Oct 19, 2022 at 11:19 AM Andres Freund <andres@anarazel.de> wrote:\n> > We e.g. currently can't track the number of blocks written out in a relation,\n> > because we don't have a Relation at that point. Nor can't we really get hold\n> > of one, as the writeback can happen in a different database without access to\n> > pg_class. Which is also the reason why the per-relation IO stats aren't\n> > populated by the startup process, even though it'd obviously sometimes be\n> > helpful to know where the most IO time is spent on a standby.\n> >\n> > There's also quite a bit of contortions of the bufmgr interface related to\n> > this.\n> \n> This seems related to the difficulty with distinguishing between\n> internal pages and leaf pages (or some generalized AM-agnostic\n> definition) in views like pg_statio_*_indexes.\n>\n> Differentiating between leaf pages and internal pages would definitely\n> be a big improvement, but it's kind of an awkward thing to implement\n> [1] because you have to somehow invent the general concept of multiple\n> distinct kinds of buffers/pages within a relation. A lot of code would\n> need to be taught about that.\n> \n> This work would be more likely to actually happen if it was tied to\n> some bigger project that promised other benefits.\n\nStickier buffers for index pages seems to be related. I haven't see it\neven get started, though. But this might be able be an additional\nreason for starting it.\n\n> > 2) Split index and table statistics into different types of stats\n> \n> > This e.g. would allow us keep track of the number of index entries killed via\n> > the killtuples mechanism, which in turn would allow us to more intelligently\n> > decide whether we should vacuum indexes (often the most expensive part of\n> > vacuum). 
In a lot of workload killtuples takes care of most of the cleanup,\n> > but in others it doesn't do much.\n> \n> While I do agree that it would be nice to record information about the\n> number of deletion operations per index, that information will still\n> be tricky to interpret and act upon relative to other kinds of\n> information. As a general rule, we should prefer to focus on signals\n> that show things really aren't going well in some specific and\n> unambiguous way. Signals about things that are going well seem harder\n> to work with -- they don't generalize well.\n\nI think some statistics can be pure-internal purpose. We can maintain\nsome statistics hidden from users, if we want. (However, I think\npeople will request for the numbers to be revealed, finally..)\n\n> What I really mean here is this: I think that page split stuff is\n> going to be much more interesting than index deletion stuff. Index\n> deletion exists to prevent page splits. So it's natural to ask\n> questions about where that seems like it ought to have happened, but\n> didn't actually happen. This likely requires bucketing page splits\n> into different categories (since most individual page splits aren't\n> like that at all). Then it becomes much easier to (say) compare\n> indexes on the same table -- the user can follow a procedure that is\n> likely to generalize well to many different kinds of situations.\n> \n> It's not completely clear how the bucketization would work. We ought\n> to remember how many page splits were caused by INSERT statements\n> rather than non-HOT UPDATEs, though -- that much seems likely to be\n> very useful and actionable. The DBA can probably consume this\n> information in a low context way by looking at the proportions of one\n> kind of split to the other at the level of each index.\n> \n> One type of split is mostly just a \"cost of doing business\" for B-Tree\n> indexing. 
The other type really isn't.\n>\n> > 3) Maintain more historical statistics about vacuuming\n> \n> > However, none of that allows the user to identify which relations are causing\n> > autovacuum to not keep up. Even just keeping track of the the total time\n> > autovacuum has spent on certain relations would be a significant improvement,\n> > with more easily imaginable (total IO [time], autovacuum delay time, xid age).\n> \n> With VACUUM in particular the picture over time can be far easier to\n> work with than any given snapshot, from any single VACUUM operation.\n> Focusing on how things seem to be changing can make it a lot easier to\n> spot concerning trends, especially if you're a non-expert.\n\nAgreed. It seem like a kind of easy (low-hanging) one. I'll give it a\ntry. There should be some other numbers that timeseries stats are\nuseful.\n\n> I would also expect a similar focus on the picture over time to be\n> useful with the indexing stuff, for roughly the same underlying\n> reasons.\n> \n> [1] https://postgr.es/m/CAA8Fd-pB=mr42YQuoaLPO_o2=XO9YJnjQ23CYJDFwC8SXGM8zg@mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 21 Oct 2022 10:26:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared memory stats ideas"
},
{
"msg_contents": "Hi,\n\nOn 10/20/22 9:17 AM, Drouvot, Bertrand wrote:\n> On 10/19/22 8:19 PM, Andres Freund wrote:\n>>\n>> 2) Split index and table statistics into different types of stats\n>>\n>> We track both types of statistics in the same format and rename column in\n>> views etc to make them somewhat sensible. A number of the \"columns\" in \n>> index\n>> stats are currently unused.\n>>\n>> If we split the stats for indexes and relations we can have reasonable \n>> names\n>> for the fields, shrink the current memory usage by halfing the set of \n>> fields\n>> we keep for indexes, and extend the stats in a more targeted fashion.\n> \n> I started to work on this.\n> I should be able to provide a patch attempt in the next couple of weeks.\n\nPatch submitted and CF entry created: \nhttps://commitfest.postgresql.org/40/3984/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 14:20:09 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shared memory stats ideas"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 2:26 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Stickier buffers for index pages seems to be related. I haven't see it\n> even get started, though. But this might be able be an additional\n> reason for starting it.\n\nMaybe, but FWIW I think that that will mostly just need to distinguish\nleaf pages from heap pages (and mostly ignore internal pages). Within\neach index, internal pages are typically no more than a fraction of 1%\nof all pages. There are already so few internal pages that it seems\nvery likely that they're practically guaranteed to be cached already.\nThere is a huge asymmetry in how pages are naturally accessed, which\njustifies treating them as fundamentally different things.\n\nSeparating leaf pages from internal pages for instrumentation purposes\nis valuable because it allows the DBA to completely *ignore* internal\npages. Internal pages are accessed far far more frequently than leaf\npages. In effect, internal pages add \"noise\" to the instrumentation,\nobscuring the useful \"signal\" that the DBA should focus on (by\nconsidering leaf level hits and misses in isolation). So the value is\nfrom \"removing noise\", not from \"adding signal\".\n\nYou only need about 1% of the memory required to cache a big index to\nget a \"hit rate\" of 75% (assuming you don't have a workload that's\nvery scan heavy, which would be unusual). Obviously the standard naive\ndefinition of \"index hit rate\" isn't particularly useful.\n\n> > While I do agree that it would be nice to record information about the\n> > number of deletion operations per index, that information will still\n> > be tricky to interpret and act upon relative to other kinds of\n> > information. As a general rule, we should prefer to focus on signals\n> > that show things really aren't going well in some specific and\n> > unambiguous way. 
Signals about things that are going well seem harder\n> > to work with -- they don't generalize well.\n>\n> I think some statistics can be pure-internal purpose. We can maintain\n> some statistics hidden from users, if we want. (However, I think\n> people will request for the numbers to be revealed, finally..)\n\nIt will probably be easy to add information about index tuple\ndeletions, without almost no downside, so of course we should do it.\nMy point was just that it's probably not the single most informative\nthing that could be instrumented to help users to understand index\nbloat. It's just much easier to understand what's not working than\nwhat is going well. It's a stronger and more informative signal.\n\n> > With VACUUM in particular the picture over time can be far easier to\n> > work with than any given snapshot, from any single VACUUM operation.\n> > Focusing on how things seem to be changing can make it a lot easier to\n> > spot concerning trends, especially if you're a non-expert.\n>\n> Agreed. It seem like a kind of easy (low-hanging) one. I'll give it a\n> try. There should be some other numbers that timeseries stats are\n> useful.\n\nGreat!\n\nThere probably is some way that VACUUM itself will ultimately use this\ninformation to decide what to do. For example, if we go too long\nwithout doing any index vacuuming, we might want to do it despite the\nfact that there are relatively few LP_DEAD items in heap pages.\n\nI don't think that we need to worry too much about how VACUUM itself\nmight apply the same information for now, but it's something that you\nmight want to consider.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 31 Oct 2022 15:29:06 +0000",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: shared memory stats ideas"
}
] |
[
{
"msg_contents": "Hi,\n\nCreating a new thread focussed on adding docs for building Postgres with\nmeson. This is a spinoff from the original thread [1] and I've attempted to\naddress all the feedback provided there in the attached patch.\n\nPlease let me know your thoughts.\n\n[1]\nhttps://www.postgresql.org/message-id/28de92b5-a514-fe1b-1637-ba228aa2cccf%40enterprisedb.com\n\nRegards,\nSamay",
"msg_date": "Wed, 19 Oct 2022 11:35:10 -0700",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Documentation for building with meson"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 11:35:10AM -0700, samay sharma wrote:\n> Creating a new thread focussed on adding docs for building Postgres with\n> meson. This is a spinoff from the original thread [1] and I've attempted to\n> address all the feedback provided there in the attached patch.\n> \n> Please let me know your thoughts.\n\nIt's easier to review rendered documentation.\nI made a rendered copy available here:\nhttps://api.cirrus-ci.com/v1/artifact/task/6084297781149696/html_docs/html_docs/install-meson.html\n\n+ <application>Flex</application> and <application>Bison</application>\n+ are needed to build <productname>PostgreSQL</productname> using\n+ <application>meson</application>. Be sure to get\n+ <application>Flex</application> 2.5.31 or later and\n+ <application>Bison</application> 1.875 or later from your package manager.\n+ Other <application>lex</application> and <application>yacc</application>\n+ programs cannot be used.\n\nThese versions need to be updated, see also: 57bab3330:\n - b086a47a270 \"Bump minimum version of Bison to 2.3\"\n - 8b878bffa8d \"Bump minimum version of Flex to 2.5.35\"\n\n+ will be enabled automatically if the required packages are found.\n\nshould refer to files/libraries/headers/dependencies rather than\n\"packages\" ?\n\n+ default is false that is to use <application>Readline</application>.\n\n\"that is to use\" should be parenthesized or separate with commas, like\n| default is false, that is to use <application>Readline</application>.\n\nzlib is mentioned twice, the first being \"strongly recommended\".\nIs that intended? Also, basebackup can use zlib, too.\n\n+ If you have the binaries for certain by programs required to build\n\nremove \"by\" ?\n\n+ Postgres (with or without optional flags) stored at non-standard\n+ paths, you could specify them manually to meson configure. 
The complete\n+ list of programs for whom this is supported can be found by running\n\nfor *which\n\nActually, I suggest to say:\n|If a program required to build Postgres (with or without optional flags)\n|is stored in a non-standard path, ...\n\n+ a build with a different value of these options.\n\n.. with different values ..\n\n+ the server, it is recommended to use atleast the <option>--buildtype=debug</option>\n\nat least\n\n+ and it's options in the meson documentation.\n\nits\n\nMaybe other things should have <productname> ?\n\n Git\n Libxml2\n libxslt\n visual studio\n DTrace\n ninja\n\n+ <application>Flex</application> and <application>Bison</application>\n\nMaybe these should use <productname> ?\n\n+ be installed with <literal>pip</literal>.\n\nShould be <application> ?\n\nThis part is redundant with prior text:\n\" To use this option, you will need an implementation of the Gettext API. \"\n\n+ Enabls use of the Zlib library\n\ntypo: Enabls\n\n+ This option is set to true by default and setting it to false will \n\nchange \"and\" to \";\" for spinlocks and atomics?\n\n+ Debug info is generated but the result is not optimized. \n\nMaybe say the \"code\" is not optimized ?\n\n+ the tests can slow down the server significantly\n\nremove \"can\"\n\n+ You can override the amount of parallel processes used with\n\ns/amount/number/\n\n+ If you'd like to build with a backend other that ninja\n\nother *than\n\n+ the <acronym>GNU</acronym> C library then you will additionally\n\nlibrary comma\n\n+ argument. 
If no <literal>srcdir</literal> is given Meson will deduce the\n\ngiven comma\n\n+ It should be noted that after the initial configure step\n\nstep comma\n\n+ After the installation you can free disk space by removing the built\n\ninstallation comma\n\n+ To learn more about running the tests and how to interpret the results\n+ you can refer to the documentation for interpreting test results.\n\ninterpret the results comma\n\"for interpreting test results\" seems redundant.\n\n+ ninja install should work for most cases but if you'd like to use more options\n\ncases comma\n\nStarting with \"Developer Options\", this intermixes postgres\nproject-specific options like cassert and tap-tests with meson's stuff\nlike buildtype and werror. IMO there's too much detail about meson's\noptions, which I think is better left to that project's own\ndocumentation, and postgres docs should include only a brief mention and\na reference to their docs.\n\n+ Ninja will automatically detect the number of CPUs in your computer and\n+ parallelize itself accordingly. You can override the amount of parallel\n+ processes used with the command line argument -j. \n\ntoo much detail for my taste..\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 19 Oct 2022 21:43:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Wed, Oct 19, 2022 at 7:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Oct 19, 2022 at 11:35:10AM -0700, samay sharma wrote:\n> > Creating a new thread focussed on adding docs for building Postgres with\n> > meson. This is a spinoff from the original thread [1] and I've attempted\n> to\n> > address all the feedback provided there in the attached patch.\n> >\n> > Please let me know your thoughts.\n>\n> It's easier to review rendered documentation.\n> I made a rendered copy available here:\n>\n> https://api.cirrus-ci.com/v1/artifact/task/6084297781149696/html_docs/html_docs/install-meson.html\n\n\nThanks for your for review. Attached v2 of the patch here.\n\n\n>\n>\n> + <application>Flex</application> and <application>Bison</application>\n> + are needed to build <productname>PostgreSQL</productname> using\n> + <application>meson</application>. Be sure to get\n> + <application>Flex</application> 2.5.31 or later and\n> + <application>Bison</application> 1.875 or later from your package\n> manager.\n> + Other <application>lex</application> and\n> <application>yacc</application>\n> + programs cannot be used.\n>\n> These versions need to be updated, see also: 57bab3330:\n> - b086a47a270 \"Bump minimum version of Bison to 2.3\"\n> - 8b878bffa8d \"Bump minimum version of Flex to 2.5.35\"\n>\n\nChanged\n\n\n>\n> + will be enabled automatically if the required packages are found.\n>\n> should refer to files/libraries/headers/dependencies rather than\n> \"packages\" ?\n>\n\nChanged to dependencies\n\n\n>\n> + default is false that is to use\n> <application>Readline</application>.\n>\n> \"that is to use\" should be parenthesized or separate with commas, like\n> | default is false, that is to use <application>Readline</application>.\n>\n> zlib is mentioned twice, the first being \"strongly recommended\".\n> Is that intended? 
Also, basebackup can use zlib, too.\n>\n\nYes, the first is in the requirements section where we just list packages\nrequired / recommended. The other mention is in the list of configure\noptions. This is similar to how the documentation looks today for make /\nautoconf. Added pg_basebackup as a use case too.\n\n\n>\n> + If you have the binaries for certain by programs required to\n> build\n>\n> remove \"by\" ?\n>\n\nDone\n\n\n>\n> + Postgres (with or without optional flags) stored at non-standard\n> + paths, you could specify them manually to meson configure. The\n> complete\n> + list of programs for whom this is supported can be found by\n> running\n>\n> for *which\n>\n> Actually, I suggest to say:\n> |If a program required to build Postgres (with or without optional flags)\n> |is stored in a non-standard path, ...\n>\n\nLiked this framing better. Changed.\n\n>\n> + a build with a different value of these options.\n>\n> .. with different values ..\n>\n\nDone\n\n>\n> + the server, it is recommended to use atleast the\n> <option>--buildtype=debug</option>\n>\n> at least\n>\nDone\n\n>\n> + and it's options in the meson documentation.\n>\n> its\n>\nDone\n\n>\n> Maybe other things should have <productname> ?\n>\n> Git\n> Libxml2\n> libxslt\n> visual studio\n> DTrace\n> ninja\n>\n> + <application>Flex</application> and <application>Bison</application>\n>\n> Maybe these should use <productname> ?\n>\n\nI'm unsure of the right protocol for this. I tried to follow the precedent\nset in the make / autoconf part of the documentation, which uses\n<productname> at certain places and <application> at others. Is there a\nreference or guidance on which to use where or is it mostly a case by case\ndecision?\n\n\n> + be installed with <literal>pip</literal>.\n>\n> Should be <application> ?\n>\n\nChanged.\n\n>\n> This part is redundant with prior text:\n> \" To use this option, you will need an implementation of the Gettext API. 
\"\n>\n\nModified.\n\n>\n> + Enabls use of the Zlib library\n>\n> typo: Enabls\n>\n\nFixed.\n\n>\n> + This option is set to true by default and setting it to false will\n>\n> change \"and\" to \";\" for spinlocks and atomics?\n>\n\nDone\n\n>\n> + Debug info is generated but the result is not optimized.\n>\n> Maybe say the \"code\" is not optimized ?\n>\n\nChanged\n\n>\n> + the tests can slow down the server significantly\n>\n> remove \"can\"\n>\n\nDone.\n\n\n>\n> + You can override the amount of parallel processes used with\n>\n> s/amount/number/\n>\n\nDone\n\n>\n> + If you'd like to build with a backend other that ninja\n>\n> other *than\n>\n\nFixed.\n\n>\n> + the <acronym>GNU</acronym> C library then you will additionally\n>\n> library comma\n>\n\nAdded\n\n>\n> + argument. If no <literal>srcdir</literal> is given Meson will deduce\n> the\n>\n> given comma\n>\n\nAdded\n\n>\n> + It should be noted that after the initial configure step\n>\n> step comma\n>\n\nAdded\n\n>\n> + After the installation you can free disk space by removing the built\n>\n> installation comma\n>\n\nAdded\n\n>\n> + To learn more about running the tests and how to interpret the results\n> + you can refer to the documentation for interpreting test results.\n>\n> interpret the results comma\n> \"for interpreting test results\" seems redundant.\n>\n\nChanged.\n\n>\n> + ninja install should work for most cases but if you'd like to use more\n> options\n>\n> cases comma\n>\n\nAdded\n\n>\n> Starting with \"Developer Options\", this intermixes postgres\n> project-specific options like cassert and tap-tests with meson's stuff\n> like buildtype and werror. 
IMO there's too much detail about meson's\n> options, which I think is better left to that project's own\n> documentation, and postgres docs should include only a brief mention and\n> a reference to their docs.\n>\n\nThe meson specific options I've chosen to document are: auto_features,\nbackend, c_args, c_link_args, buildtype, optimization, werror,\nerrorlogs and b_coverage as I felt they might be used often and are useful\nto know. But, it's very possible that some of them might be obvious and\nothers may not be as useful as I thought. Are there specific ones you'd\nsuggest we can remove? Also, if you're curious, this is the list I picked\nfrom: https://mesonbuild.com/Commands.html#configure.\n\nIn terms of detail about individual options, I think the descriptions about\nmost of them are brief but buildtype was pretty verbose. I have shortened\nit.\n\n\n>\n> + Ninja will automatically detect the number of CPUs in your computer and\n> + parallelize itself accordingly. You can override the amount of parallel\n> + processes used with the command line argument -j.\n>\n> too much detail for my taste..\n>\n\nI added this as make / autoconf doesn't do something like this. So, it\nmight be useful to know for people switching over.\n\nRegards,\nSamay\n\n\n>\n> --\n> Justin\n>",
"msg_date": "Wed, 26 Oct 2022 12:23:32 -0700",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "+# Run the main pg_regress and isolation tests\n+<userinput>meson test --suite main</userinput>\n\nThis does not work for me in a fresh install until running\n\nmeson test --suite setup\n\nIn fact, we see in\n\nhttps://wiki.postgresql.org/wiki/Meson\n\nmeson test --suite setup --suite main\n\nThat was just an eyeball check from a naive user -- it would be good to try\nrunning everything documented here.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n+# Run the main pg_regress and isolation tests+<userinput>meson test --suite main</userinput>This does not work for me in a fresh install until runningmeson test --suite setupIn fact, we see in https://wiki.postgresql.org/wiki/Mesonmeson test --suite setup --suite mainThat was just an eyeball check from a naive user -- it would be good to try running everything documented here.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 Oct 2022 15:04:32 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 1:04 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> This does not work for me in a fresh install until running\n>\n> meson test --suite setup\n>\n> In fact, we see in\n>\n> https://wiki.postgresql.org/wiki/Meson\n>\n> meson test --suite setup --suite main\n\n(Is there a way to declare a dependency on the setup suite in Meson,\nso that we don't have to specify it manually? I was bitten by this\nrecently; if you make a code change and forget to run setup, it'll\nrecompile locally but then skip reinstallation, giving false test\nresults.)\n\n--Jacob\n\n\n",
"msg_date": "Thu, 27 Oct 2022 14:15:32 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-27 14:15:32 -0700, Jacob Champion wrote:\n> On Thu, Oct 27, 2022 at 1:04 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > This does not work for me in a fresh install until running\n> >\n> > meson test --suite setup\n> >\n> > In fact, we see in\n> >\n> > https://wiki.postgresql.org/wiki/Meson\n> >\n> > meson test --suite setup --suite main\n> \n> (Is there a way to declare a dependency on the setup suite in Meson,\n> so that we don't have to specify it manually? I was bitten by this\n> recently; if you make a code change and forget to run setup, it'll\n> recompile locally but then skip reinstallation, giving false test\n> results.)\n\nTests can have dependencies, and they're correctly built. The problem however\nis that, for historical reasons if I understand correctly, dependencies of\ntests are automatically included in the default 'all' target. Which means if\nyou just type in 'ninja', it'd automatically create the test installation -\nwhich is probably not what we want, given that that's not a fast step on some\nplatforms.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 27 Oct 2022 16:03:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 4:03 PM Andres Freund <andres@anarazel.de> wrote:\n> Tests can have dependencies, and they're correctly built. The problem however\n> is that, for historical reasons if I understand correctly, dependencies of\n> tests are automatically included in the default 'all' target. Which means if\n> you just type in 'ninja', it'd automatically create the test installation -\n> which is probably not what we want, given that that's not a fast step on some\n> platforms.\n\nAnd I see that between-suite dependencies were rejected as a feature\n[1]. Ah well, `--suite setup` is not so bad once you learn it.\n\nThanks!\n--Jacob\n\n[1] https://github.com/mesonbuild/meson/issues/2740\n\n\n",
"msg_date": "Fri, 28 Oct 2022 08:43:57 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Thu, Oct 27, 2022 at 1:04 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> +# Run the main pg_regress and isolation tests\n> +<userinput>meson test --suite main</userinput>\n>\n> This does not work for me in a fresh install until running\n>\n> meson test --suite setup\n>\n> In fact, we see in\n>\n> https://wiki.postgresql.org/wiki/Meson\n>\n> meson test --suite setup --suite main\n>\n\nYou are right that this will be needed for a new install. I've added\n--suite setup in the testing section in the v3 of the patch (attached).\n\n\n> That was just an eyeball check from a naive user -- it would be good to\n> try running everything documented here.\n>\n\nI retried all the instructions as suggested and they work for me.\n\nRegards,\nSamay\n\n\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>",
"msg_date": "Sun, 30 Oct 2022 20:51:59 -0700",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-30 20:51:59 -0700, samay sharma wrote:\n> +# setup and enter build directory (done only first time)\n> +meson setup build src --prefix=$PWD/install\n\nThis command won't work on windows, I think.\n\n\n> + <sect2 id=\"configure-meson\">\n> + <title>Configuring the build</title>\n> +\n> + <para>\n> + The first step of the installation procedure is to configure the\n> + source tree for your system and choose the options you would like. To\n\ns/source tree/build tree/?\n\n\n> + create and configure the build directory, you can start with the\n> + <literal>meson setup</literal> command.\n> + </para>\n> +\n> +<screen>\n> +<userinput>meson setup build</userinput>\n> +</screen>\n> +\n> + <para>\n> + The setup command takes a <literal>builddir</literal> and a <literal>srcdir</literal>\n> + argument. If no <literal>srcdir</literal> is given, Meson will deduce the\n> + <literal>srcdir</literal> based on the current directory and the location\n> + of <literal>meson.build</literal>. The <literal>builddir</literal> is mandatory.\n> + </para>\n> +\n> + <para>\n> + Meson then loads the build configuration file and sets up the build directory.\n> + Additionally, the invocation can pass options to Meson. The list of commonly\n> + used options is in subsequent sections. A few examples of specifying different\n> + build options are:\n\nSomehow the \"tone\" is descriptive in a distanced way, rather than instructing\nwhat to do.\n\n\n> + <sect3 id=\"configure-install-locations\">\n> + <title>Installation Locations</title>\n> +\n> + <para>\n> + These options control where <literal>ninja install (or meson install)</literal> will put\n> + the files. The <option>--prefix</option> option is sufficient for\n> + most cases.\n\nPerhaps the short version use of prefix could be a link here? 
Not sure if\nthat's a good idea.\n\n\n\n> + <variablelist>\n> + <varlistentry>\n> + <term><option>--prefix=<replaceable>PREFIX</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Install all files under the directory <replaceable>PREFIX</replaceable>\n> + instead of <filename>/usr/local/pgsql</filename>. The actual\n> + files will be installed into various subdirectories; no files\n> + will ever be installed directly into the\n> + <replaceable>PREFIX</replaceable> directory.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nHm, need to mention windows here likely. By default the installation will go\nto <current drive letter>:/usr/local/pgsql.\n\n\n> + <varlistentry>\n> + <term><option>--bindir=<replaceable>DIRECTORY</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Specifies the directory for executable programs. The default\n> + is <filename><replaceable>PREFIX</replaceable>/bin</filename>, which\n> + normally means <filename>/usr/local/pgsql/bin</filename>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nHm, do we really want the \"which normally means\" part? That'll make the OS\nstuff more complicated.\n\n\n> + <varlistentry>\n> + <term><option>--sysconfdir=<replaceable>DIRECTORY</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Sets the directory for various configuration files,\n> + <filename><replaceable>PREFIX</replaceable>/etc</filename> by default.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nNeed to check what windows does here.\n\n\n> + <varlistentry>\n> + <term><option>-Dnls=<replaceable>auto/enabled/disabled</replaceable></option></term>\n\nWonder if we should define a entity for\n<replaceable>auto/enabled/disabled</replaceable>? There's a lot of repetitions\nof it.\n\n\n> + <listitem>\n> + <para>\n> + Enables or disables Native Language Support (<acronym>NLS</acronym>),\n> + that is, the ability to display a program's messages in a\n> + language other than English. 
It defaults to auto, meaning that it\n> + will be enabled automatically if an implementation of the\n> + <application>Gettext API</application> is found.\n> + </para>\n\nDo we really want to repeat the \"It defaults to auto, meaning that it will be\nenabled automatically if ...\" for each of these? Perhaps we could say\n'defaults to <xref ...>auto</xref>'?\n\n\n> + <para>\n> + By default,\n> + <productname>pkg-config</productname><indexterm><primary>pkg-config</primary></indexterm>\n> + will be used to find the required compilation options. This is\n> + supported for <productname>ICU4C</productname> version 4.6 and later.\n> + <!-- Add description for older ICU4C versions and when pkg-config isn't available-->\n> + </para>\n\nI'd just remove this paragraph. Right now the meson build will just use solely\nuse pkg-config config files for icu. I don't think we need to care about 4.6\n(from 2010) anymore.\n\n\n> + <varlistentry id=\"configure-with-llvm-meson\">\n> + <term><option>-Dllvm=<replaceable>auto/enabled/disabled</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Build with support for <productname>LLVM</productname> based\n> + <acronym>JIT</acronym> compilation<phrase\n> + condition=\"standalone-ignore\"> (see <xref\n> + linkend=\"jit\"/>)</phrase>. This\n> + requires the <productname>LLVM</productname> library to be installed.\n> + The minimum required version of <productname>LLVM</productname> is\n> + currently 3.9. 
It is set to disabled by default.\n> + </para>\n> + <para>\n> + <command>llvm-config</command><indexterm><primary>llvm-config</primary></indexterm>\n> + will be used to find the required compilation options.\n> + <command>llvm-config</command>, and then\n> + <command>llvm-config-$major-$minor</command> for all supported\n> + versions, will be searched for in your <envar>PATH</envar>.\n> + <!--Add substitute fo LLVM_CONFIG when llvm-config is not in PATH-->\n> + </para>\n\nProbably a link to the docs for meson native files suffices here for\nnow. Since the autoconf docs have been written there's only\nllvm-config-$version, llvm stopped having separate major/minor versions\nsomewhere around llvm 4. I think it'd suffice to say llvm-config-$version?\n\n\n> + <para>\n> + <productname>LLVM</productname> support requires a compatible\n> + <command>clang</command> compiler (specified, if necessary, using the\n> + <envar>CLANG</envar> environment variable), and a working C++\n> + compiler (specified, if necessary, using the <envar>CXX</envar>\n> + environment variable).\n> + </para>\n\nFor clang we don't look for CLANG anymore, we use for the clang compiler\nbelonging to the llvm installation of llvm-config.\n\n\n> + <listitem>\n> + <para>\n> + Build with support for <acronym>SSL</acronym> (encrypted)\n> + connections. The only <replaceable>LIBRARY</replaceable>\n> + supported is <option>openssl</option>. This requires the\n> + <productname>OpenSSL</productname> package to be installed.\n> + <filename>configure</filename> will check for the required\n\nThe <filename>configure</filename> reference is out of date.\n\n> + header files and libraries to make sure that your\n> + <productname>OpenSSL</productname> installation is sufficient\n> + before proceeding. 
The default for this option is none.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n\n> + <varlistentry>\n> + <term><option>-Dgssapi=<replaceable>auto/enabled/disabled</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Build with support for GSSAPI authentication. On many systems, the\n> + GSSAPI system (usually a part of the Kerberos installation) is not\n> + installed in a location\n> + that is searched by default (e.g., <filename>/usr/include</filename>,\n> + <filename>/usr/lib</filename>), so you must use the options\n> + <option>-Dextra_include_dirs</option> and <option>-Dextra_lib_dirs</option> in\n> + addition to this option. <filename>meson configure</filename> will check\n> + for the required header files and libraries to make sure that\n> + your GSSAPI installation is sufficient before proceeding.\n> + It defaults to auto, meaning that it will be enabled automatically if the\n> + required dependencies are found.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nRight now we only work for gssapi installations providing a pkg-config config\nfile. We could change that if we encounter a system where that's insufficient.\n\n\n> + <varlistentry>\n> + <term><option>-Duuid=<replaceable>LIBRARY</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Build the <xref linkend=\"uuid-ossp\"/> module\n> + (which provides functions to generate UUIDs), using the specified\n> + UUID library.<indexterm><primary>UUID</primary></indexterm>\n> + <replaceable>LIBRARY</replaceable> must be one of:\n> + </para>\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <option>none</option> to not build the ussp module. This is the default.\n> + </para>\n> + </listitem>\n\ns/ussp/uuid/?\n\n\n\n> + <para>\n> + To detect the required compiler and linker options, PostgreSQL will\n> + query <command>pkg-config</command>, if that is installed and knows\n> + about libxml2. 
Otherwise the program <command>xml2-config</command>,\n> + which is installed by libxml2, will be used if it is found. Use\n> + of <command>pkg-config</command> is preferred, because it can deal\n> + with multi-architecture installations better.\n> + </para>\n\nRight now only pkg-config is supported with meson.\n\n\n> +\n> + <varlistentry>\n> + <term><option>-Dspinlocks=<replaceable>true/false</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + This option is set to true by default; setting it to false will\n> + allow the build to succeed even if <productname>PostgreSQL</productname>\n> + has no CPU spinlock support for the platform. The lack of\n> + spinlock support will result in very poor performance; therefore,\n> + this option should only be changed if the build aborts and\n> + informs you that the platform lacks spinlock support. If setting this\n> + option to false is required to build <productname>PostgreSQL</productname> on\n> + your platform, please report the problem to the\n> + <productname>PostgreSQL</productname> developers.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><option>-Datomics=<replaceable>true/false</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + This option is set to true by default; setting it to false will\n> + disable use of CPU atomic operations. The option does nothing on\n> + platforms that lack such operations. On platforms that do have\n> + them, disabling atomics will result in poor performance. Changing\n> + this option is only useful for debugging or making performance comparisons.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> + </variablelist>\n\nI think these should rather be in the developer section? 
They're not\ndependencies and as you noted, they're not normally useful.\n\n\n> + <varlistentry>\n> + <term><option>-Dextra_include_dirs=<replaceable>DIRECTORIES</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + <replaceable>DIRECTORIES</replaceable> is a colon-separated list of\n> + directories that will be added to the list the compiler\n> + searches for header files. If you have optional packages\n> + (such as GNU <application>Readline</application>) installed in a non-standard\n> + location,\n> + you have to use this option and probably also the corresponding\n> + <option>-Dextra_lib_dirs</option> option.\n> + </para>\n> + <para>\n> + Example: <literal>-Dextra_include_dirs=/opt/gnu/include:/usr/sup/include</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nThe separator for meson is a comma, rather than a :\n\n> + <varlistentry>\n> + <term><option>-Dextra_lib_dirs=<replaceable>DIRECTORIES</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + <replaceable>DIRECTORIES</replaceable> is a colon-separated list of\n> + directories to search for libraries. You will probably have\n> + to use this option (and the corresponding\n> + <option>-Dextra_include_dirs</option> option) if you have packages\n> + installed in non-standard locations.\n> + </para>\n> + <para>\n> + Example: <literal>-Dextra_lib_dirs=/opt/gnu/lib:/usr/sup/lib</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nDito.\n\n\n> + <variablelist>\n> +\n> + <varlistentry>\n> + <term><option>-Dsegsize=<replaceable>SEGSIZE</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Set the <firstterm>segment size</firstterm>, in gigabytes. Large tables are\n> + divided into multiple operating-system files, each of size equal\n> + to the segment size. This avoids problems with file size limits\n> + that exist on many platforms. The default segment size, 1 gigabyte,\n> + is safe on all supported platforms. 
If your operating system has\n> + <quote>largefile</quote> support (which most do, nowadays), you can use\n> + a larger segment size. This can be helpful to reduce the number of\n> + file descriptors consumed when working with very large tables.\n> + But be careful not to select a value larger than is supported\n> + by your platform and the file systems you intend to use. Other\n> + tools you might wish to use, such as <application>tar</application>, could\n> + also set limits on the usable file size.\n> + It is recommended, though not absolutely required, that this value\n> + be a power of 2.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><option>-Dblocksize=<replaceable>BLOCKSIZE</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Set the <firstterm>block size</firstterm>, in kilobytes. This is the unit\n> + of storage and I/O within tables. The default, 8 kilobytes,\n> + is suitable for most situations; but other values may be useful\n> + in special cases.\n> + The value must be a power of 2 between 1 and 32 (kilobytes).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><option>-Dwal_blocksize=<replaceable>BLOCKSIZE</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Set the <firstterm>WAL block size</firstterm>, in kilobytes. This is the unit\n> + of storage and I/O within the WAL log. The default, 8 kilobytes,\n> + is suitable for most situations; but other values may be useful\n> + in special cases.\n> + The value must be a power of 2 between 1 and 64 (kilobytes).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + </variablelist>\n\nThe order of the list entries seems a bit random? 
Perhaps just go for\nalphabetical?\n\n\n> + </sect3>\n> +\n> + <sect3 id=\"configure-devel\">\n> + <title>Developer Options</title>\n> +\n> + <para>\n> + Most of the options in this section are only of interest for\n> + developing or debugging <productname>PostgreSQL</productname>.\n> + They are not recommended for production builds, except\n> + for <option>--debug</option>, which can be useful to enable\n> + detailed bug reports in the unlucky event that you encounter a bug.\n> + On platforms supporting DTrace, <option>-Ddtrace</option>\n> + may also be reasonable to use in production.\n> + </para>\n> +\n> + <para>\n> + When building an installation that will be used to develop code inside\n> + the server, it is recommended to use at least the <option>--buildtype=debug</option>\n> + and <option>-Dcassert</option> options.\n> + </para>\n> +\n> + <variablelist>\n> + <varlistentry>\n> + <term><option>--buildtype=<replaceable>BUILDTYPE</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + This option can be used to specify the buildtype to use; defaults\n> + to release. If you'd like finer control on the debug symbols\n> + and optimization levels than what this option provides, you can\n> + refer to the --debug and --optimization flags.\n> +\n> + The following build types are generally used: plain, debug, debugoptimized\n> + and release. More information about them can be found in the\n> + <ulink url=\"https://mesonbuild.com/Running-Meson.html#configuring-the-build-directory\">\n> + meson documentation</ulink>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><option>--debug</option></term>\n> + <listitem>\n> + <para>\n> + Compiles all programs and libraries with debugging symbols.\n> + This means that you can run the programs in a debugger\n> + to analyze problems. 
This enlarges the size of the installed\n> + executables considerably, and on non-GCC compilers it usually\n> + also disables compiler optimization, causing slowdowns. However,\n> + having the symbols available is extremely helpful for dealing\n> + with any problems that might arise. Currently, this option is\n> + recommended for production installations only if you use GCC.\n> + But you should always have it on if you are doing development work\n> + or running a beta version.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> + <varlistentry>\n> + <term><option>--optimization</option>=<replaceable>LEVEL</replaceable></term>\n> + <listitem>\n> + <para>\n> + Specify the optimization level. LEVEL can be set to any of {0,g,1,2,3,s}.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nWonder if we should just document optimization and debug, rather than\nbuildtype. The fact that debug/optimization override buildtype is a bit\nconfusing.\n\n\n> + <varlistentry>\n> + <term><option>-Dtap-tests</option></term>\n> + <listitem>\n> + <para>\n> + Enable tests using the Perl TAP tools. This requires a Perl\n> + installation and the Perl module <literal>IPC::Run</literal>.\n> + <phrase condition=\"standalone-ignore\">See <xref linkend=\"regress-tap\"/> for more information.</phrase>\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nThis is an auto option as well.\n\n\n> + <varlistentry>\n> + <term><option>--errorlogs</option></term>\n> + <listitem>\n> + <para>\n> + This option can be used to print the logs from the failing tests\n> + making debugging easier.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI don't think it's worth documenting this, it defaults to true anyway.\n\n\n> + <para>\n> + To point to the <command>dtrace</command> program, the\n> + environment variable <envar>DTRACE</envar> can be set. 
This\n> + will often be necessary because <command>dtrace</command> is\n> + typically installed under <filename>/usr/sbin</filename>,\n> + which might not be in your <envar>PATH</envar>.\n> + </para>\n\nWe don't read the DTRACE environment variable, but the DTRACE option.\n\n\n\n> + <para>\n> + <literal>ninja install</literal> should work for most cases,\n> + but if you'd like to use more options, you could also use\n> + <literal>meson install</literal> instead. You can learn more about\n> + <ulink url=\"https://mesonbuild.com/Commands.html#install\">meson install</ulink>\n> + and its options in the meson documentation.\n> + </para>\n\nMaybe we should mention meson --quiet here? The verbosity of ninja install is\na bit annoying.\n\n\n> +# Run the main pg_regress and isolation tests\n> +<userinput>meson test --suite setup --suite main</userinput>\n\nSince yesterday the main suite is no more. There's 'regress' and 'isolation'\nnow.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Nov 2022 14:39:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Sat, Nov 5, 2022 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-10-30 20:51:59 -0700, samay sharma wrote:\n> > +# setup and enter build directory (done only first time)\n> > +meson setup build src --prefix=$PWD/install\n>\n> This command won't work on windows, I think.\n>\n\nI'll submit another version after testing on windows and seeing what else\nwe need to fix. I've addressed all the other feedback in the attached v4.\n\n>\n>\n> > + <sect2 id=\"configure-meson\">\n> > + <title>Configuring the build</title>\n> > +\n> > + <para>\n> > + The first step of the installation procedure is to configure the\n> > + source tree for your system and choose the options you would like.\n> To\n>\n> s/source tree/build tree/?\n>\n> Done\n\n>\n> > + create and configure the build directory, you can start with the\n> > + <literal>meson setup</literal> command.\n> > + </para>\n> > +\n> > +<screen>\n> > +<userinput>meson setup build</userinput>\n> > +</screen>\n> > +\n> > + <para>\n> > + The setup command takes a <literal>builddir</literal> and a\n> <literal>srcdir</literal>\n> > + argument. If no <literal>srcdir</literal> is given, Meson will\n> deduce the\n> > + <literal>srcdir</literal> based on the current directory and the\n> location\n> > + of <literal>meson.build</literal>. The <literal>builddir</literal>\n> is mandatory.\n> > + </para>\n> > +\n> > + <para>\n> > + Meson then loads the build configuration file and sets up the build\n> directory.\n> > + Additionally, the invocation can pass options to Meson. The list of\n> commonly\n> > + used options is in subsequent sections. A few examples of\n> specifying different\n> > + build options are:\n>\n> Somehow the \"tone\" is descriptive in a distanced way, rather than\n> instructing\n> what to do.\n>\n\nThis was mostly copy pasted from meson docs. 
Rewrote it to make briefer and\nchanged the tone to be more conversational.\n\n>\n>\n> > + <sect3 id=\"configure-install-locations\">\n> > + <title>Installation Locations</title>\n> > +\n> > + <para>\n> > + These options control where <literal>ninja install (or meson\n> install)</literal> will put\n> > + the files. The <option>--prefix</option> option is sufficient for\n> > + most cases.\n>\n> Perhaps the short version use of prefix could be a link here? Not sure if\n> that's a good idea.\n>\n\nAdded as an example\n\n>\n>\n>\n> > + <variablelist>\n> > + <varlistentry>\n> > +\n> <term><option>--prefix=<replaceable>PREFIX</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Install all files under the directory\n> <replaceable>PREFIX</replaceable>\n> > + instead of <filename>/usr/local/pgsql</filename>. The actual\n> > + files will be installed into various subdirectories; no files\n> > + will ever be installed directly into the\n> > + <replaceable>PREFIX</replaceable> directory.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Hm, need to mention windows here likely. By default the installation will\n> go\n> to <current drive letter>:/usr/local/pgsql.\n>\n>\n> > + <varlistentry>\n> > +\n> <term><option>--bindir=<replaceable>DIRECTORY</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Specifies the directory for executable programs. The default\n> > + is <filename><replaceable>PREFIX</replaceable>/bin</filename>,\n> which\n> > + normally means <filename>/usr/local/pgsql/bin</filename>.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Hm, do we really want the \"which normally means\" part? That'll make the OS\n> stuff more complicated.\n>\n\nRemoved. 
We mention what the default is in the description of PREFIX, so it\nshouldn't be needed anyway.\n\n>\n>\n> > + <varlistentry>\n> > +\n> <term><option>--sysconfdir=<replaceable>DIRECTORY</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Sets the directory for various configuration files,\n> > + <filename><replaceable>PREFIX</replaceable>/etc</filename> by\n> default.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Need to check what windows does here.\n>\n>\n> > + <varlistentry>\n> > +\n> <term><option>-Dnls=<replaceable>auto/enabled/disabled</replaceable></option></term>\n>\n> Wonder if we should define a entity for\n> <replaceable>auto/enabled/disabled</replaceable>? There's a lot of\n> repetitions\n> of it.\n>\n\nI couldn't come up with a good entity name which is significantly shorter.\nI think it's probably fine to have this as it clearly tells you the\npossible values you can set it to. I'll remove repetitive descriptions of\nwhat they mean.\n\n>\n>\n> > + <listitem>\n> > + <para>\n> > + Enables or disables Native Language Support\n> (<acronym>NLS</acronym>),\n> > + that is, the ability to display a program's messages in a\n> > + language other than English. It defaults to auto, meaning that\n> it\n> > + will be enabled automatically if an implementation of the\n> > + <application>Gettext API</application> is found.\n> > + </para>\n>\n> Do we really want to repeat the \"It defaults to auto, meaning that it will\n> be\n> enabled automatically if ...\" for each of these? Perhaps we could say\n> 'defaults to <xref ...>auto</xref>'?\n>\n\nI added a description to the beginning of the Postgres features section and\nremoved the repetitive \"enabled automatically if ....\".\n\n>\n>\n> > + <para>\n> > + By default,\n> > +\n> <productname>pkg-config</productname><indexterm><primary>pkg-config</primary></indexterm>\n> > + will be used to find the required compilation options. 
This is\n> > + supported for <productname>ICU4C</productname> version 4.6 and\n> later.\n> > + <!-- Add description for older ICU4C versions and when\n> pkg-config isn't available-->\n> > + </para>\n>\n> I'd just remove this paragraph. Right now the meson build will just use\n> solely\n> use pkg-config config files for icu. I don't think we need to care about\n> 4.6\n> (from 2010) anymore.\n>\n\nRemoved\n\n>\n>\n> > + <varlistentry id=\"configure-with-llvm-meson\">\n> > +\n> <term><option>-Dllvm=<replaceable>auto/enabled/disabled</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Build with support for <productname>LLVM</productname> based\n> > + <acronym>JIT</acronym> compilation<phrase\n> > + condition=\"standalone-ignore\"> (see <xref\n> > + linkend=\"jit\"/>)</phrase>. This\n> > + requires the <productname>LLVM</productname> library to be\n> installed.\n> > + The minimum required version of\n> <productname>LLVM</productname> is\n> > + currently 3.9. It is set to disabled by default.\n> > + </para>\n> > + <para>\n> > +\n> <command>llvm-config</command><indexterm><primary>llvm-config</primary></indexterm>\n> > + will be used to find the required compilation options.\n> > + <command>llvm-config</command>, and then\n> > + <command>llvm-config-$major-$minor</command> for all supported\n> > + versions, will be searched for in your <envar>PATH</envar>.\n> > + <!--Add substitute fo LLVM_CONFIG when llvm-config is not in\n> PATH-->\n> > + </para>\n>\n> Probably a link to the docs for meson native files suffices here for\n> now. Since the autoconf docs have been written there's only\n> llvm-config-$version, llvm stopped having separate major/minor versions\n> somewhere around llvm 4. I think it'd suffice to say llvm-config-$version?\n>\n\nLLVM_CONFIG is now supported by newer versions of meson\nhttps://github.com/mesonbuild/meson/pull/10757. 
So, will just ask users to\nuse that?\n\nChanged to llvm-config-$version\n\n>\n>\n> > + <para>\n> > + <productname>LLVM</productname> support requires a compatible\n> > + <command>clang</command> compiler (specified, if necessary,\n> using the\n> > + <envar>CLANG</envar> environment variable), and a working C++\n> > + compiler (specified, if necessary, using the <envar>CXX</envar>\n> > + environment variable).\n> > + </para>\n>\n> For clang we don't look for CLANG anymore, we use for the clang compiler\n> belonging to the llvm installation of llvm-config.\n>\n\nRemoved the paragraph.\n\n>\n>\n> > + <listitem>\n> > + <para>\n> > + Build with support for <acronym>SSL</acronym> (encrypted)\n> > + connections. The only <replaceable>LIBRARY</replaceable>\n> > + supported is <option>openssl</option>. This requires the\n> > + <productname>OpenSSL</productname> package to be installed.\n> > + <filename>configure</filename> will check for the required\n>\n> The <filename>configure</filename> reference is out of date.\n>\n\nRemoved.\n\n>\n> > + header files and libraries to make sure that your\n> > + <productname>OpenSSL</productname> installation is sufficient\n> > + before proceeding. The default for this option is none.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> >\n>\n> > + <varlistentry>\n> > +\n> <term><option>-Dgssapi=<replaceable>auto/enabled/disabled</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Build with support for GSSAPI authentication. On many systems,\n> the\n> > + GSSAPI system (usually a part of the Kerberos installation) is\n> not\n> > + installed in a location\n> > + that is searched by default (e.g.,\n> <filename>/usr/include</filename>,\n> > + <filename>/usr/lib</filename>), so you must use the options\n> > + <option>-Dextra_include_dirs</option> and\n> <option>-Dextra_lib_dirs</option> in\n> > + addition to this option. 
<filename>meson configure</filename>\n> will check\n> > + for the required header files and libraries to make sure that\n> > + your GSSAPI installation is sufficient before proceeding.\n> > + It defaults to auto, meaning that it will be enabled\n> automatically if the\n> > + required dependencies are found.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Right now we only work for gssapi installations providing a pkg-config\n> config\n> file. We could change that if we encounter a system where that's\n> insufficient.\n>\n\nChanged to use pkg-config for non-standard paths.\n\n>\n>\n> > + <varlistentry>\n> > +\n> <term><option>-Duuid=<replaceable>LIBRARY</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Build the <xref linkend=\"uuid-ossp\"/> module\n> > + (which provides functions to generate UUIDs), using the\n> specified\n> > + UUID library.<indexterm><primary>UUID</primary></indexterm>\n> > + <replaceable>LIBRARY</replaceable> must be one of:\n> > + </para>\n> > + <itemizedlist>\n> > + <listitem>\n> > + <para>\n> > + <option>none</option> to not build the ussp module. This is\n> the default.\n> > + </para>\n> > + </listitem>\n>\n> s/ussp/uuid/?\n>\n\nChanged.\n\n>\n>\n>\n> > + <para>\n> > + To detect the required compiler and linker options, PostgreSQL\n> will\n> > + query <command>pkg-config</command>, if that is installed and\n> knows\n> > + about libxml2. 
Otherwise the program\n> <command>xml2-config</command>,\n> > + which is installed by libxml2, will be used if it is found.\n> Use\n> > + of <command>pkg-config</command> is preferred, because it can\n> deal\n> > + with multi-architecture installations better.\n> > + </para>\n>\n> Right now only pkg-config is supported with meson.\n>\n\nRemoved the paragraph and only left \" To use a libxml2 installation that is\nin an unusual location, you can set <command>pkg-config</command>-related\nenvironment variables (see its documentation).\"\n\n>\n>\n> > +\n> > + <varlistentry>\n> > +\n> <term><option>-Dspinlocks=<replaceable>true/false</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + This option is set to true by default; setting it to false will\n> > + allow the build to succeed even if\n> <productname>PostgreSQL</productname>\n> > + has no CPU spinlock support for the platform. The lack of\n> > + spinlock support will result in very poor performance;\n> therefore,\n> > + this option should only be changed if the build aborts and\n> > + informs you that the platform lacks spinlock support. If\n> setting this\n> > + option to false is required to build\n> <productname>PostgreSQL</productname> on\n> > + your platform, please report the problem to the\n> > + <productname>PostgreSQL</productname> developers.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > +\n> <term><option>-Datomics=<replaceable>true/false</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + This option is set to true by default; setting it to false will\n> > + disable use of CPU atomic operations. The option does nothing\n> on\n> > + platforms that lack such operations. 
On platforms that do have\n> > + them, disabling atomics will result in poor performance.\n> Changing\n> > + this option is only useful for debugging or making performance\n> comparisons.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > + </variablelist>\n>\n> I think these should rather be in the developer section? They're not\n> dependencies and as you noted, they're not normally useful.\n>\n\nMakes sense. Moved.\n\n>\n>\n> > + <varlistentry>\n> > +\n> <term><option>-Dextra_include_dirs=<replaceable>DIRECTORIES</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + <replaceable>DIRECTORIES</replaceable> is a colon-separated\n> list of\n> > + directories that will be added to the list the compiler\n> > + searches for header files. If you have optional packages\n> > + (such as GNU <application>Readline</application>) installed in\n> a non-standard\n> > + location,\n> > + you have to use this option and probably also the corresponding\n> > + <option>-Dextra_lib_dirs</option> option.\n> > + </para>\n> > + <para>\n> > + Example:\n> <literal>-Dextra_include_dirs=/opt/gnu/include:/usr/sup/include</literal>.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> The separator for meson is a comma, rather than a :\n>\n\nChanged.\n\n>\n> > + <varlistentry>\n> > +\n> <term><option>-Dextra_lib_dirs=<replaceable>DIRECTORIES</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + <replaceable>DIRECTORIES</replaceable> is a colon-separated\n> list of\n> > + directories to search for libraries. 
You will probably have\n> > + to use this option (and the corresponding\n> > + <option>-Dextra_include_dirs</option> option) if you have\n> packages\n> > + installed in non-standard locations.\n> > + </para>\n> > + <para>\n> > + Example:\n> <literal>-Dextra_lib_dirs=/opt/gnu/lib:/usr/sup/lib</literal>.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Dito.\n>\n\nChanged.\n\n>\n>\n> > + <variablelist>\n> > +\n> > + <varlistentry>\n> > +\n> <term><option>-Dsegsize=<replaceable>SEGSIZE</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Set the <firstterm>segment size</firstterm>, in gigabytes.\n> Large tables are\n> > + divided into multiple operating-system files, each of size\n> equal\n> > + to the segment size. This avoids problems with file size\n> limits\n> > + that exist on many platforms. The default segment size, 1\n> gigabyte,\n> > + is safe on all supported platforms. If your operating system\n> has\n> > + <quote>largefile</quote> support (which most do, nowadays),\n> you can use\n> > + a larger segment size. This can be helpful to reduce the\n> number of\n> > + file descriptors consumed when working with very large tables.\n> > + But be careful not to select a value larger than is supported\n> > + by your platform and the file systems you intend to use. Other\n> > + tools you might wish to use, such as\n> <application>tar</application>, could\n> > + also set limits on the usable file size.\n> > + It is recommended, though not absolutely required, that this\n> value\n> > + be a power of 2.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > +\n> <term><option>-Dblocksize=<replaceable>BLOCKSIZE</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Set the <firstterm>block size</firstterm>, in kilobytes. This\n> is the unit\n> > + of storage and I/O within tables. 
The default, 8 kilobytes,\n> > + is suitable for most situations; but other values may be useful\n> > + in special cases.\n> > + The value must be a power of 2 between 1 and 32 (kilobytes).\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > +\n> <term><option>-Dwal_blocksize=<replaceable>BLOCKSIZE</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Set the <firstterm>WAL block size</firstterm>, in kilobytes.\n> This is the unit\n> > + of storage and I/O within the WAL log. The default, 8\n> kilobytes,\n> > + is suitable for most situations; but other values may be useful\n> > + in special cases.\n> > + The value must be a power of 2 between 1 and 64 (kilobytes).\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + </variablelist>\n>\n> The order of the list entries seems a bit random? Perhaps just go for\n> alphabetical?\n>\n\nDone\n\n>\n>\n> > + </sect3>\n> > +\n> > + <sect3 id=\"configure-devel\">\n> > + <title>Developer Options</title>\n> > +\n> > + <para>\n> > + Most of the options in this section are only of interest for\n> > + developing or debugging <productname>PostgreSQL</productname>.\n> > + They are not recommended for production builds, except\n> > + for <option>--debug</option>, which can be useful to enable\n> > + detailed bug reports in the unlucky event that you encounter a bug.\n> > + On platforms supporting DTrace, <option>-Ddtrace</option>\n> > + may also be reasonable to use in production.\n> > + </para>\n> > +\n> > + <para>\n> > + When building an installation that will be used to develop code\n> inside\n> > + the server, it is recommended to use at least the\n> <option>--buildtype=debug</option>\n> > + and <option>-Dcassert</option> options.\n> > + </para>\n> > +\n> > + <variablelist>\n> > + <varlistentry>\n> > +\n> <term><option>--buildtype=<replaceable>BUILDTYPE</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + This option can be used to specify the 
buildtype to use;\n> defaults\n> > + to release. If you'd like finer control on the debug symbols\n> > + and optimization levels than what this option provides, you can\n> > + refer to the --debug and --optimization flags.\n> > +\n> > + The following build types are generally used: plain, debug,\n> debugoptimized\n> > + and release. More information about them can be found in the\n> > + <ulink url=\"\n> https://mesonbuild.com/Running-Meson.html#configuring-the-build-directory\n> \">\n> > + meson documentation</ulink>.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > + <term><option>--debug</option></term>\n> > + <listitem>\n> > + <para>\n> > + Compiles all programs and libraries with debugging symbols.\n> > + This means that you can run the programs in a debugger\n> > + to analyze problems. This enlarges the size of the installed\n> > + executables considerably, and on non-GCC compilers it usually\n> > + also disables compiler optimization, causing slowdowns.\n> However,\n> > + having the symbols available is extremely helpful for dealing\n> > + with any problems that might arise. Currently, this option is\n> > + recommended for production installations only if you use GCC.\n> > + But you should always have it on if you are doing development\n> work\n> > + or running a beta version.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n> > +\n> > + <varlistentry>\n> > +\n> <term><option>--optimization</option>=<replaceable>LEVEL</replaceable></term>\n> > + <listitem>\n> > + <para>\n> > + Specify the optimization level. LEVEL can be set to any of\n> {0,g,1,2,3,s}.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Wonder if we should just document optimization and debug, rather than\n> buildtype. The fact that debug/optimization override buildtype is a bit\n> confusing.\n>\n\nYes, it was a bit confusing which is why I ended up documenting them as\nwell. 
Not sure about doing just the debug / optimization as buildtype is\nlikely a useful shorthand. I've kept as is for now but if you feel strongly\nabout documenting only one of the two, I can remove.\n\n>\n>\n> > + <varlistentry>\n> > + <term><option>-Dtap-tests</option></term>\n> > + <listitem>\n> > + <para>\n> > + Enable tests using the Perl TAP tools. This requires a Perl\n> > + installation and the Perl module <literal>IPC::Run</literal>.\n> > + <phrase condition=\"standalone-ignore\">See <xref\n> linkend=\"regress-tap\"/> for more information.</phrase>\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> This is an auto option as well.\n>\n\nFixed.\n\n>\n>\n> > + <varlistentry>\n> > + <term><option>--errorlogs</option></term>\n> > + <listitem>\n> > + <para>\n> > + This option can be used to print the logs from the failing tests\n> > + making debugging easier.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> I don't think it's worth documenting this, it defaults to true anyway.\n>\n\nMakes sense. Removed.\n\n>\n>\n> > + <para>\n> > + To point to the <command>dtrace</command> program, the\n> > + environment variable <envar>DTRACE</envar> can be set. This\n> > + will often be necessary because <command>dtrace</command> is\n> > + typically installed under <filename>/usr/sbin</filename>,\n> > + which might not be in your <envar>PATH</envar>.\n> > + </para>\n>\n> We don't read the DTRACE environment variable, but the DTRACE option.\n>\n\nGood catch. Changed.\n\n>\n>\n>\n> > + <para>\n> > + <literal>ninja install</literal> should work for most cases,\n> > + but if you'd like to use more options, you could also use\n> > + <literal>meson install</literal> instead. You can learn more about\n> > + <ulink url=\"https://mesonbuild.com/Commands.html#install\">meson\n> install</ulink>\n> > + and its options in the meson documentation.\n> > + </para>\n>\n> Maybe we should mention meson --quiet here? 
The verbosity of ninja install\n> is\n> a bit annoying.\n>\n\nDone\n\n\n>\n>\n> > +# Run the main pg_regress and isolation tests\n> > +<userinput>meson test --suite setup --suite main</userinput>\n>\n> Since yesterday the main suite is no more. There's 'regress' and\n> 'isolation'\n> now.\n>\n\nChanged\n\nRegards,\nSamay\n\n>\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Tue, 8 Nov 2022 10:23:28 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nI did some tests on windows. I used 'ninja' as a backend.\n\nOn 11/8/2022 9:23 PM, samay sharma wrote:\n> Hi,\n>\n> On Sat, Nov 5, 2022 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-30 20:51:59 -0700, samay sharma wrote:\n> > +# setup and enter build directory (done only first time)\n> > +meson setup build src --prefix=$PWD/install\n>\n> This command won't work on windows, I think.\n>\n\nYes, $PWD isn't recognized on windows, %CD% could be alternative.\n\n\n> > + <varlistentry>\n> > +\n> <term><option>--sysconfdir=<replaceable>DIRECTORY</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Sets the directory for various configuration files,\n> > + <filename><replaceable>PREFIX</replaceable>/etc</filename> by\n> default.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Need to check what windows does here.\n>\n\nIt is same on windows: 'PREFIX/etc'.\n\nI also checked other dirs(bindir, sysconfdir, libdir, includedir, \ndatadir, localedir, mandir), default path is correct for all of them.\n\nRegards,\nNazir Bilal Yavuz\n\n\n\n\n\n\n\n\nHi,\n\n I did some tests on windows. 
I used 'ninja' as a backend.\n\nOn 11/8/2022 9:23 PM, samay sharma\n wrote:\n\n\n\n\nHi,\n\n\nOn Sat, Nov 5, 2022 at 2:39\n PM Andres Freund <andres@anarazel.de>\n wrote:\n\nHi,\n\n On 2022-10-30 20:51:59 -0700, samay sharma wrote:\n > +# setup and enter build directory (done only first\n time)\n > +meson setup build src --prefix=$PWD/install\n\n This command won't work on windows, I think.\n\n\n\n\n\n Yes, $PWD isn't recognized on windows, %CD% could be alternative.\n\n\n\n\n\n\n > + <varlistentry>\n > + \n <term><option>--sysconfdir=<replaceable>DIRECTORY</replaceable></option></term>\n > + <listitem>\n > + <para>\n > + Sets the directory for various configuration\n files,\n > + \n <filename><replaceable>PREFIX</replaceable>/etc</filename>\n by default.\n > + </para>\n > + </listitem>\n > + </varlistentry>\n\n Need to check what windows does here.\n\n\n\n\n\n It is same on windows: 'PREFIX/etc'.\n\n I also checked other dirs(bindir, sysconfdir, libdir, includedir,\n datadir, localedir, mandir), default path is correct for all of\n them.\n\n Regards,\n \n Nazir Bilal Yavuz",
"msg_date": "Thu, 10 Nov 2022 15:46:27 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Thu, Nov 10, 2022 at 4:46 AM Nazir Bilal Yavuz <byavuz81@gmail.com>\nwrote:\n\n> Hi,\n>\n> I did some tests on windows. I used 'ninja' as a backend.\n> On 11/8/2022 9:23 PM, samay sharma wrote:\n>\n> Hi,\n>\n> On Sat, Nov 5, 2022 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2022-10-30 20:51:59 -0700, samay sharma wrote:\n>> > +# setup and enter build directory (done only first time)\n>> > +meson setup build src --prefix=$PWD/install\n>>\n>> This command won't work on windows, I think.\n>>\n>\n> Yes, $PWD isn't recognized on windows, %CD% could be alternative.\n>\nAdded.\n\n>\n> > + <varlistentry>\n>> > +\n>> <term><option>--sysconfdir=<replaceable>DIRECTORY</replaceable></option></term>\n>> > + <listitem>\n>> > + <para>\n>> > + Sets the directory for various configuration files,\n>> > + <filename><replaceable>PREFIX</replaceable>/etc</filename> by\n>> default.\n>> > + </para>\n>> > + </listitem>\n>> > + </varlistentry>\n>>\n>> Need to check what windows does here.\n>>\n>\n> It is same on windows: 'PREFIX/etc'.\n>\n> I also checked other dirs(bindir, sysconfdir, libdir, includedir, datadir,\n> localedir, mandir), default path is correct for all of them.\n>\n\nThanks. I've made a few windows specific fixes in the latest version.\nAttached v5.\n\nRegards,\nSamay\n\n\n>\n> Regards,\n> Nazir Bilal Yavuz\n>\n>\n>\n>",
"msg_date": "Mon, 14 Nov 2022 10:41:21 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "2022年10月20日(木) 11:43 Justin Pryzby <pryzby@telsasoft.com>:\n>\n> On Wed, Oct 19, 2022 at 11:35:10AM -0700, samay sharma wrote:\n> > Creating a new thread focussed on adding docs for building Postgres with\n> > meson. This is a spinoff from the original thread [1] and I've attempted to\n> > address all the feedback provided there in the attached patch.\n> >\n> > Please let me know your thoughts.\n>\n> It's easier to review rendered documentation.\n> I made a rendered copy available here:\n> https://api.cirrus-ci.com/v1/artifact/task/6084297781149696/html_docs/html_docs/install-meson.html\n\nFor reference, are there any instructions anywhere on how to do this? It'd be\nuseful to be able to provide a link to the latest version of this documentation\n(and also document the process for patch authors in general).\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 16 Nov 2022 10:52:35 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 10:52:35AM +0900, Ian Lawrence Barwick wrote:\n> 2022年10月20日(木) 11:43 Justin Pryzby <pryzby@telsasoft.com>:\n> >\n> > On Wed, Oct 19, 2022 at 11:35:10AM -0700, samay sharma wrote:\n> > > Creating a new thread focussed on adding docs for building Postgres with\n> > > meson. This is a spinoff from the original thread [1] and I've attempted to\n> > > address all the feedback provided there in the attached patch.\n> > >\n> > > Please let me know your thoughts.\n> >\n> > It's easier to review rendered documentation.\n> > I made a rendered copy available here:\n> > https://api.cirrus-ci.com/v1/artifact/task/6084297781149696/html_docs/html_docs/install-meson.html\n> \n> For reference, are there any instructions anywhere on how to do this? It'd be\n> useful to be able to provide a link to the latest version of this documentation\n> (and also document the process for patch authors in general).\n\nI've submitted patches which would do that for every patch (ideally,\nexcluding patches that don't touch the docs, although it looks like the\n\"exclusion\" isn't working).\nhttps://commitfest.postgresql.org/40/3709/\n\nThe most recent patches on that thread don't include the \"docs as\nartifacts\" patch, but only the preparatory \"build docs as a separate\ntask\". I think the other part is stalled waiting for some updates to\ncfbot to allow knowing how many commits are in the patchset.\n\nFYI, you can navigate from the cirrus task's URL to the git commit (and\nits parents)\n\nhttps://cirrus-ci.com/task/6084297781149696 =>\nhttps://github.com/justinpryzby/postgres/commit/7b57f3323fc77e9b04ef2e76976776090eb8b5b5\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 15 Nov 2022 20:22:19 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 10:41:21AM -0800, samay sharma wrote:\n\n> You need LZ4, if you want to support compression of data with that\n> method; see default_toast_compression and wal_compression. \n\n=> The first comma is odd. Maybe it should say \"LZ4 is needed to\nsupport ..\"\n\n> You need Zstandard, if you want to support compression of data or\n> backups with that method; see wal_compression. The minimum required\n> version is 1.4.0. \n\nSame.\n\nAlso, since v15, LZ4 and zstd can both be used by basebackup.\n\n>Some commonly used ones are mentioned in the subsequent sections\n\n=> Some commonly used options ...\n\n> Most of these require additional software, as described in Section\n> 17.3.2, and are set to be auto features.\n\n=> \"Are set to be auto features\" sounds odd. I think it should say\nsomething like \" .. and are automatically enabled if the required\nsoftware is detected.\".\n\n> You can change this behavior by manually setting the auto features to\n> enabled to require them or disabled to not build with them. \n\nremove \"auto\". Maybe \"enabled\" and \"disabled\" need markup.\n\n> On Windows, the default WinLDAP library is used. It defults to auto\n\ntypo: defults\n\n> It defaults to auto and libsystemd and the associated header files need\n> to be installed to use this option.\n\n=> write this as two separate sentences. Same for libxml.\n\n> bsd to use the UUID functions found in FreeBSD, NetBSD, and some other\n> BSD-derived systems \n\n=> should remove mention of netbsd, like c4b6d218e\n\n> Enables use of the Zlib library. It defaults to auto and enables\n> support for compressed archives in pg_dump ,pg_restore and\n> pg_basebackup and is recommended. \n\n=> The comma is mis-placed.\n\n> The default backend meson uses is ninja and that should suffice for\n> most use cases. However, if you'd like to fully integrate with\n> visual studio, you can set the BACKEND to vs. 
\n\n=> BACKEND is missing markup.\n\n> This option can be used to specify the buildtype to use; defaults to\n> release\n\n=> release is missing markup\n\n> Specify the optimization level. LEVEL can be set to any of\n> {0,g,1,2,3,s}. \n\n=> LEVEL is missing markup\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Wed, 23 Nov 2022 00:36:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Tue, Nov 22, 2022 at 10:36 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Nov 14, 2022 at 10:41:21AM -0800, samay sharma wrote:\n>\n> > You need LZ4, if you want to support compression of data with that\n> > method; see default_toast_compression and wal_compression.\n>\n> => The first comma is odd. Maybe it should say \"LZ4 is needed to\n> support ..\"\n>\n> > You need Zstandard, if you want to support compression of data or\n> > backups with that method; see wal_compression. The minimum required\n> > version is 1.4.0.\n>\n> Same.\n>\n> Also, since v15, LZ4 and zstd can both be used by basebackup.\n>\n> >Some commonly used ones are mentioned in the subsequent sections\n>\n> => Some commonly used options ...\n>\n> > Most of these require additional software, as described in Section\n> > 17.3.2, and are set to be auto features.\n>\n> => \"Are set to be auto features\" sounds odd. I think it should say\n> something like \" .. and are automatically enabled if the required\n> software is detected.\".\n>\n> > You can change this behavior by manually setting the auto features to\n> > enabled to require them or disabled to not build with them.\n>\n> remove \"auto\". Maybe \"enabled\" and \"disabled\" need markup.\n>\n> > On Windows, the default WinLDAP library is used. It defults to auto\n>\n> typo: defults\n>\n> > It defaults to auto and libsystemd and the associated header files need\n> > to be installed to use this option.\n>\n> => write this as two separate sentences. Same for libxml.\n>\n> > bsd to use the UUID functions found in FreeBSD, NetBSD, and some other\n> > BSD-derived systems\n>\n> => should remove mention of netbsd, like c4b6d218e\n>\n> > Enables use of the Zlib library. 
It defaults to auto and enables\n> > support for compressed archives in pg_dump ,pg_restore and\n> > pg_basebackup and is recommended.\n>\n> => The comma is mis-placed.\n>\n> > The default backend meson uses is ninja and that should suffice for\n> > most use cases. However, if you'd like to fully integrate with\n> > visual studio, you can set the BACKEND to vs.\n>\n> => BACKEND is missing markup.\n>\n> > This option can be used to specify the buildtype to use; defaults to\n> > release\n>\n> => release is missing markup\n>\n> > Specify the optimization level. LEVEL can be set to any of\n> > {0,g,1,2,3,s}.\n>\n> => LEVEL is missing markup\n>\n\nThanks for the feedback. Addressed all and added markup at a few more\nplaces in v6 (attached).\n\nRegards,\nSamay\n\n>\n> Thanks,\n> --\n> Justin\n>",
"msg_date": "Wed, 23 Nov 2022 11:30:54 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 11:30:54AM -0800, samay sharma wrote:\n> Thanks for the feedback. Addressed all and added markup at a few more\n> places in v6 (attached).\n\nThanks. It looks good to me. A couple thoughts, maybe they're not\nimportant.\n\n - LZ4 and Zstd refer to wal_compression and default_toast_compression,\n but not to any corresponding option to basebackup.\n\n - There's no space after the hash mark here; but above, there was:\n #Setup build directory with a different installation prefix\n\n - You use slash to show enumerated options, but it's more typical to\n use braces: {a | b | c}:\n -Dnls=auto/enabled/disabled\n\n - There's no earlier description/definition of an \"auto\" feature, but\n still says this:\n \"Setting this option allows you to override value of all 'auto' features\"\n\n - Currently the documentation always refers to \"PostgreSQL\", but you\n added two references to \"Postgres\":\n + If a program required to build Postgres...\n + Once Postgres is built...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 23 Nov 2022 14:16:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Wed, Nov 23, 2022 at 12:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Nov 23, 2022 at 11:30:54AM -0800, samay sharma wrote:\n> > Thanks for the feedback. Addressed all and added markup at a few more\n> > places in v6 (attached).\n>\n> Thanks. It looks good to me. A couple thoughts, maybe they're not\n> important.\n>\n\nThank you. Attaching v7 addressing most of the points below.\n\n\n>\n> - LZ4 and Zstd refer to wal_compression and default_toast_compression,\n> but not to any corresponding option to basebackup.\n>\n> - There's no space after the hash mark here; but above, there was:\n> #Setup build directory with a different installation prefix\n>\n\nAdded a space as that looks better.\n\n>\n> - You use slash to show enumerated options, but it's more typical to\n> use braces: {a | b | c}:\n> -Dnls=auto/enabled/disabled\n>\n\nChanged.\n\n>\n> - There's no earlier description/definition of an \"auto\" feature, but\n> still says this:\n> \"Setting this option allows you to override value of all 'auto'\n> features\"\n>\n\nDescribed what an \"auto\" feature is in ().\n\n>\n> - Currently the documentation always refers to \"PostgreSQL\", but you\n> added two references to \"Postgres\":\n> + If a program required to build Postgres...\n> + Once Postgres is built...\n>\n\nGood catch. Changed to PostgreSQL.\n\nRegards,\nSamay\n\n>\n> --\n> Justin\n>",
"msg_date": "Wed, 23 Nov 2022 13:24:51 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Thanks; two more things I saw:\n\n - In other docs, <replacable> isn't used around { a | b } lists:\n git grep '<replaceable>[^<]*|' doc\n - I think this is(was) missing a word;\n Setting this option allows you to override THE value of all 'auto' features\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 27 Nov 2022 08:47:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On 23.11.22 22:24, samay sharma wrote:\n> Thank you. Attaching v7 addressing most of the points below.\n\nI have committed this, after some editing and making some structural \nchanges. I moved the \"Requirements\" section back to the top level. It \ndid not look appealing to have to maintain two copies of this that have \nalmost no substantial difference (but for some reason were written with \nseparate structure and wording). Also, I rearranged the Building with \nMeson section to use the same internal structure as the Building with \nAutoconf and Make section. This will make it easier to maintain going \nforward. For example if someone adds a new option, it will be easier to \nfind the corresponding places in the lists where to add them.\n\nWe will likely keep iterating on the contents for the next little while, \nbut I'm glad we now have a structure in place that we should be able to \nlive with.\n\n\n\n",
"msg_date": "Thu, 1 Dec 2022 15:58:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-01 15:58:39 +0100, Peter Eisentraut wrote:\n> On 23.11.22 22:24, samay sharma wrote:\n> > Thank you. Attaching v7 addressing most of the points below.\n> \n> I have committed this, after some editing and making some structural\n> changes.\n\nThanks. I was working on that too, but somehow felt a bit stuck...\n\nI'll try if I can adapt my pending changes.\n\n\n> I moved the \"Requirements\" section back to the top level. It did\n> not look appealing to have to maintain two copies of this that have almost\n> no substantial difference (but for some reason were written with separate\n> structure and wording).\n\nI don't think this is good. The whole \"The following software packages are\nrequired for building PostgreSQL\" section is wrong now. \"They are not\nrequired in the default configuration, but they are needed when certain build\noptions are enabled, as explained below:\" section is misleading as well.\n\nBy the time we fix all of those we'll end up with a different section again.\n\n\n> Also, I rearranged the Building with Meson section to use the same internal\n> structure as the Building with Autoconf and Make section. This will make it\n> easier to maintain going forward. For example if someone adds a new option,\n> it will be easier to find the corresponding places in the lists where to add\n> them.\n\nI don't know. The existing list order makes very little sense to me. The\nE.g. --enable-nls is before the rest in configure, presumably because it sorts\nthere alphabetically. But that's not the case for meson.\n\nCopying \"Anti-Features\" as a structure element to the meson docs seems bogus\n(also the name is bogus, but that's a pre-existing issue). There's no\ndifference in -Dreadline= to the other options meson-options-features list.\n\nNor does -Dspinlocks= -Datomics= make sense in the \"anti features\" section. It\nmade some sense for autoconf because of the --without- prefix, but that's not\nat play in meson. 
Their placement in the \"Developer Options\" made a whole lot\nmore sense.\n\n\nI don't like \"Miscellaneous\" bit containing minor stuff like krb_srvnam and\ndata layout changing options like blocksize,segsize,wal_blocksize. But it\nmakes sense to change that for both at the same time.\n\n\n> We will likely keep iterating on the contents for the next little while, but\n> I'm glad we now have a structure in place that we should be able to live\n> with.\n\nI agree that it's good to have something we can work from more\niteratively. But I don't think this is a structure that we can live with.\n\n\nI'm not particularly happy about this level of structural change made without\ndiscussing it prior.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 09:21:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Thu, Dec 1, 2022 at 9:21 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-12-01 15:58:39 +0100, Peter Eisentraut wrote:\n> > On 23.11.22 22:24, samay sharma wrote:\n> > > Thank you. Attaching v7 addressing most of the points below.\n> >\n> > I have committed this, after some editing and making some structural\n> > changes.\n>\n> Thanks. I was working on that too, but somehow felt a bit stuck...\n>\n> I'll try if I can adapt my pending changes.\n>\n\nI got back to working on the meson docs. I'm attaching a new patch set\nproposing some improvements to the current installation docs. I've tried to\nmake some corrections and improvements suggested in this thread while\ntrying to maintain consistency across make and meson docs as per Peter's\nask. There are 5 patches in the patch-set:\n\nv8-0001: Makes minor corrections, adds instructions to build docs and adds\na few links to meson docs.\nv8-0002, v8-0003 and v8-0004 make changes to restructure the configure\noptions based on reasoning discussed below. To maintain consistency, I've\nmade those changes on both the make and meson side.\nv8-0005 Reproposes the Short Version I had proposed in v7 as I feel we\nshould discuss that proposal. I think it improves things in terms of\ninstallation docs. More details below.\n\n\n>\n> > I moved the \"Requirements\" section back to the top level. It did\n> > not look appealing to have to maintain two copies of this that have\n> almost\n> > no substantial difference (but for some reason were written with separate\n> > structure and wording).\n>\n\nThere are a few reasons why I had done this. Some reasons Andres has\ndescribed in his previous email and I'll add a few specific examples on why\nhaving the same section for both might not be a good idea.\n\n* Having readline and zlib as required for building PostgreSQL is now wrong\nbecause they are not required for meson builds. 
Also, the names of the\nconfigs are different for make and meson and the current section only lists\nthe make ones.\n* There are many references to configure in that section which don't apply\nto meson.\n* Last I checked Flex and Bison were always required to build via meson but\nnot for make and the current section doesn't explain those differences.\n\nI spent a good amount of time thinking about whether we could have a single\nsection, clarify these differences to make it correct and not confuse the users. I\ncouldn't find a way to do all three. Therefore, I think we should move to a\ndifferent requirements section for both. I'm happy to re-propose the\nprevious version which separates them but wanted to see if anybody has\nbetter ideas.\n\n\n>\n> I don't think this is good. The whole \"The following software packages are\n> required for building PostgreSQL\" section is wrong now. \"They are not\n> required in the default configuration, but they are needed when certain\n> build\n> options are enabled, as explained below:\" section is misleading as well.\n>\n> By the time we fix all of those we'll end up with a different section\n> again.\n>\n>\n> > Also, I rearranged the Building with Meson section to use the same\n> internal\n> > structure as the Building with Autoconf and Make section. This will\n> make it\n> > easier to maintain going forward. For example if someone adds a new\n> option,\n> > it will be easier to find the corresponding places in the lists where to\n> add\n> > them.\n\n\n> I don't know. The existing list order makes very little sense to me. The\n> E.g. --enable-nls is before the rest in configure, presumably because it\n> sorts\n> there alphabetically. But that's not the case for meson.\n>\n> Copying \"Anti-Features\" as a structure element to the meson docs seems\n> bogus\n> (also the name is bogus, but that's a pre-existing issue). There's no\n> difference in -Dreadline= to the other options meson-options-features list.\n\n\n> Nor does -Dspinlocks= -Datomics= make sense in the \"anti features\"\n> section. It\n> made some sense for autoconf because of the --without- prefix, but that's\n> not\n> at play in meson. Their placement in the \"Developer Options\" made a whole\n> lot\n> more sense.\n>\n\nI agree \"Anti-Features\" doesn't make sense in the meson context. One of my\npatches removes that section and moves some options into the \"Postgres\nFeatures\" section and others into the \"Developer Options\" section. I've\nproposed to make those changes on both sides to make it easier to maintain.\n\n\n>\n> I don't like \"Miscellaneous\" bit containing minor stuff like krb_srvnam and\n> data layout changing options like blocksize,segsize,wal_blocksize. But it\n> makes sense to change that for both at the same time.\n>\n\nI've proposed a patch to add a new \"Data Layout Options\" section which\nincludes: blocksize, segsize and wal_blocksize. I've created that section\non both sides.\n\n>\n>\n> > We will likely keep iterating on the contents for the next little while,\n> but\n> > I'm glad we now have a structure in place that we should be able to live\n> > with.\n>\n\nI feel that there are a few shortcomings of the current \"Short Version\". I\ntried to address them in my previous proposal but I noticed that a version\nsimilar to the make version was committed. So, I thought I'd describe why I\nproposed a new structure.\n\n1) The current version has OS specific commands (e.g. adduser). They don't\nwork across all platforms.\n2) The installation instructions use paths which require sudo and which are\nalso OS specific. 
Not every developer who wants to build and try out\nPostgres might have sudo permissions.\n3) Most developers have a separate installation path where they store their\ndev binaries while the current instructions install them in standard paths.\n4) I wanted to have a separate directory which can nicely be cleaned once\nyou're done with building and testing the packages. That's easier to do at\na local path.\n5) There's no description of what each instruction does so that developers\ncan modify the commands if they want to change something.\n\nDue to these reasons, I feel it's worth considering the newer version of\nthe \"Short version\". I've proposed to change it only in the meson docs for\nnow but if there's interest I can modify the make instructions to be the\nsame. I left them as it is as people might be used to those instructions.\n\nRegards,\nSamay\n\n>\n> I agree that it's good to have something we can work from more\n> iteratively. But I don't think this is a structure that we can live with.\n>\n>\n> I'm not particularly happy about this level of structural change made\n> without\n> discussing it prior.\n>\n> Greetings,\n>\n> Andres Freund\n>",
"msg_date": "Fri, 24 Feb 2023 21:40:58 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": " > [PATCH v8 1/5] Make minor additions and corrections to meson docs\n\nThe last hunk revealed that there is some mixing up between meson setup \nand meson configure. This goes a bit further. For example, earlier it \nsays that to get a list of meson setup options, call meson configure \n--help and look at https://mesonbuild.com/Commands.html#configure, which \nare both wrong. Also later throughout the text it uses one or the \nother. I think this has the potential to be very confusing, and we \nshould clean this up carefully.\n\nThe text about additional meson test options maybe should go into the \nregress.sgml chapter?\n\n\n > [PATCH v8 2/5] Add data layout options sub-section in installation\n docs\n\nThis makes sense. Please double check your patch for correct title \ncasing, vertical spacing of XML, to keep everything looking consistent.\n\nThis text isn't yours, but since your patch emphasizes it, I wonder if \nit could use some clarification:\n\n+ These options affect how PostgreSQL lays out data on disk.\n+ Note that changing these breaks on-disk database compatibility,\n+ meaning you cannot use <command>pg_upgrade</command> to upgrade to\n+ a build with different values of these options.\n\nThis isn't really correct. What breaking on-disk compatibility means is \nthat you can't use a server compiled one way with a data directory \ninitialized by binaries compiled another way. pg_upgrade may well have \nthe ability to upgrade between one or the other; that's up to pg_upgrade \nto figure out but not an intrinsic property. (I wonder why pg_upgrade \ncares about the WAL block size.)\n\n\n > [PATCH v8 3/5] Remove Anti-Features section from Installation from\n source docs\n\nMakes sense. But is \"--disable-thread-safety\" really a developer \nfeature? I think not.\n\n\n > [PATCH v8 4/5] Re-organize Miscellaneous section\n\nThis moves the Miscellaneous section after Developer Features. 
I think \nDeveloper Features should be last.\n\nMaybe should remove this section and add the options to the regular \nPostgreSQL Features section.\n\nAlso consider the grouping in meson_options.txt, which is slightly \ndifferent yet.\n\n\n > [PATCH v8 5/5] Change Short Version for meson installation guide\n\n+# create working directory\n+mkdir postgres\n+cd postgres\n+\n+# fetch source code\n+git clone https://git.postgresql.org/git/postgresql.git src\n\nThis comes after the \"Getting the Source\" section, so at this point they \nalready have the source and don't need to do \"git clone\" etc. again.\n\n+# setup and enter build directory (done only first time)\n+## Unix based platforms\n+meson setup build src --prefix=$PWD/install\n+\n+## Windows\n+meson setup build src --prefix=%cd%/install\n\nMaybe some people work this way, but to me the directory structures you \ncreate here are completely weird.\n\n+# Initialize a new database\n+../install/bin/initdb -D ../data\n+\n+# Start database\n+../install/bin/pg_ctl -D ../data/ -l logfile start\n+\n+# Connect to the database\n+../install/bin/psql -d postgres\n\nThe terminology here needs to be tightened up. You are using \"database\" \nhere to mean three different things.\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 12:28:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 15, 2023 at 4:28 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> > [PATCH v8 1/5] Make minor additions and corrections to meson docs\n>\n> The last hunk revealed that there is some mixing up between meson setup\n> and meson configure. This goes a bit further. For example, earlier it\n> says that to get a list of meson setup options, call meson configure\n> --help and look at https://mesonbuild.com/Commands.html#configure, which\n> are both wrong. Also later throughout the text it uses one or the\n> other. I think this has the potential to be very confusing, and we\n> should clean this up carefully.\n>\n> The text about additional meson test options maybe should go into the\n> regress.sgml chapter?\n>\n\nI tried to make the meson setup and meson configure usage consistent. I've\nremoved the text for the test options.\n\n>\n>\n> > [PATCH v8 2/5] Add data layout options sub-section in installation\n> docs\n>\n> This makes sense. Please double check your patch for correct title\n> casing, vertical spacing of XML, to keep everything looking consistent.\n>\n\nThanks for noticing. Made it consistent on both sides.\n\n>\n> This text isn't yours, but since your patch emphasizes it, I wonder if\n> it could use some clarification:\n>\n> + These options affect how PostgreSQL lays out data on disk.\n> + Note that changing these breaks on-disk database compatibility,\n> + meaning you cannot use <command>pg_upgrade</command> to upgrade to\n> + a build with different values of these options.\n>\n> This isn't really correct. What breaking on-disk compatibility means is\n> that you can't use a server compiled one way with a data directory\n> initialized by binaries compiled another way. pg_upgrade may well have\n> the ability to upgrade between one or the other; that's up to pg_upgrade\n> to figure out but not an intrinsic property. 
(I wonder why pg_upgrade\n> cares about the WAL block size.)\n>\n\n Fixed.\n\n>\n>\n> > [PATCH v8 3/5] Remove Anti-Features section from Installation from\n> source docs\n>\n> Makes sense. But is \"--disable-thread-safety\" really a developer\n> feature? I think not.\n>\n>\nMoved to PostgreSQL features. Do you think there's a better place for it?\n\n\n>\n> > [PATCH v8 4/5] Re-organize Miscellaneous section\n>\n> This moves the Miscellaneous section after Developer Features. I think\n> Developer Features should be last.\n>\n> Maybe should remove this section and add the options to the regular\n> PostgreSQL Features section.\n>\n\nYes, that makes sense. Made this change.\n\n>\n> Also consider the grouping in meson_options.txt, which is slightly\n> different yet.\n\n\nRemoved Misc options section from meson_options.txt too.\n\n>\n>\n> > [PATCH v8 5/5] Change Short Version for meson installation guide\n>\n> +# create working directory\n> +mkdir postgres\n> +cd postgres\n> +\n> +# fetch source code\n> +git clone https://git.postgresql.org/git/postgresql.git src\n>\n> This comes after the \"Getting the Source\" section, so at this point they\n> already have the source and don't need to do \"git clone\" etc. again.\n>\n> +# setup and enter build directory (done only first time)\n> +## Unix based platforms\n> +meson setup build src --prefix=$PWD/install\n> +\n> +## Windows\n> +meson setup build src --prefix=%cd%/install\n>\n> Maybe some people work this way, but to me the directory structures you\n> create here are completely weird.\n>\n\nI'd like to discuss what you think is a good directory structure to work\nwith. I've mentioned some of the drawbacks I see with the current structure\nfor the short version. I know this structure can feel different but it\nfeeling weird is not ideal. 
Do you have a directory structure in mind which\nis different but doesn't feel odd to you?\n\n\n>\n> +# Initialize a new database\n> +../install/bin/initdb -D ../data\n> +\n> +# Start database\n> +../install/bin/pg_ctl -D ../data/ -l logfile start\n> +\n> +# Connect to the database\n> +../install/bin/psql -d postgres\n>\n> The terminology here needs to be tightened up. You are using \"database\"\n> here to mean three different things.\n>\n\nI'll address this together once we are aligned on the overall directory\nstructure etc.\n\nThere are a few reasons why I had done this. Some reasons Andres has\n> described in his previous email and I'll add a few specific examples on why\n> having the same section for both might not be a good idea.\n>\n> * Having readline and zlib as required for building PostgreSQL is now\n> wrong because they are not required for meson builds. Also, the name of the\n> configs are different for make and meson and the current section only lists\n> the make ones.\n> * There are many references to configure in that section which don't\n> apply to meson.\n> * Last I checked Flex and Bison were always required to build via meson\n> but not for make and the current section doesn't explain those differences.\n>\n> I spent a good amount of time thinking if we could have a single section,\n> clarify these differences to make it correct and not confuse the users. I\n> couldn't find a way to do all three. Therefore, I think we should move to\n> a different requirements section for both. I'm happy to re-propose the\n> previous version which separates them but wanted to see if anybody has\n> better ideas.\n\n\nDo you have thoughts on the requirements section and the motivation to have\ntwo different versions I had mentioned upthread?\n\nRegards,\nSamay",
"msg_date": "Tue, 28 Mar 2023 12:27:26 -0700",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-28 12:27:26 -0700, samay sharma wrote:\n> Subject: [PATCH v9 1/5] Make minor additions and corrections to meson docs\n> \n> This commit makes a few corrections to the meson docs\n> and adds a few instructions and links for better clarity.\n> ---\n> doc/src/sgml/installation.sgml | 24 +++++++++++++++---------\n> 1 file changed, 15 insertions(+), 9 deletions(-)\n> \n> diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml\n> index 70ab5b77d8..e3b9b6c0d0 100644\n> --- a/doc/src/sgml/installation.sgml\n> +++ b/doc/src/sgml/installation.sgml\n> @@ -2057,8 +2057,7 @@ meson setup build -Dssl=openssl\n> <screen>\n> meson configure -Dcassert=true\n> </screen>\n> - <command>meson configure</command>'s commonly used command-line options\n> - are explained in <xref linkend=\"meson-options\"/>.\n> + Commonly used build options for <command>meson configure</command> (and <command>meson setup</command>) are explained in <xref linkend=\"meson-options\"/>.\n> </para>\n> </step>\n> \n> @@ -2078,6 +2077,13 @@ ninja\n> processes used with the command line argument <literal>-j</literal>.\n> </para>\n> \n> + <para>\n> + If you want to build the docs, you can type:\n> +<screen>\n> +ninja docs\n> +</screen>\n> + </para>\n\n\"type\" sounds a bit too, IDK, process oriented. \"To build the docs, use\"?\n\n\n> Subject: [PATCH v9 2/5] Add data layout options sub-section in installation\n> docs\n> \n> This commit separates out blocksize, segsize and wal_blocksize\n> options into a separate Data layout options sub-section in both\n> the make and meson docs. They were earlier in a miscellaneous\n> section which included several unrelated options. This change\n> also helps reduce the repetition of the warnings that changing\n> these parameters breaks on-disk compatibility.\n\nMakes sense. 
I'm planning to apply this unless Peter or somebody else has\nfurther feedback.\n\n\n> From 11d82aa49efb3d1cbc08f14562a757f115053c8b Mon Sep 17 00:00:00 2001\n> From: Samay Sharma <smilingsamay@gmail.com>\n> Date: Mon, 13 Feb 2023 16:23:52 -0800\n> Subject: [PATCH v9 3/5] Remove Anti-Features section from Installation from\n> source docs\n> \n> Currently, several meson setup options are listed in anti-features.\n> However, they are similar to most other options in the postgres\n> features list as they are 'auto' features themselves. Also, other\n> options are likely better suited to the developer options section.\n> This commit, therefore, moves the options listed in the anti-features\n> section into other sections and removes that section.\n> \n> For consistency, this reorganization has been done on the make section\n> of the docs as well.\n\nMakes sense to me. \"Anti-Features\" is confusing as a name to start with.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 10:18:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn Tue, Apr 11, 2023 at 10:18 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-03-28 12:27:26 -0700, samay sharma wrote:\n> > Subject: [PATCH v9 1/5] Make minor additions and corrections to meson\n> docs\n> >\n> > This commit makes a few corrections to the meson docs\n> > and adds a few instructions and links for better clarity.\n> > ---\n> > doc/src/sgml/installation.sgml | 24 +++++++++++++++---------\n> > 1 file changed, 15 insertions(+), 9 deletions(-)\n> >\n> > diff --git a/doc/src/sgml/installation.sgml\n> b/doc/src/sgml/installation.sgml\n> > index 70ab5b77d8..e3b9b6c0d0 100644\n> > --- a/doc/src/sgml/installation.sgml\n> > +++ b/doc/src/sgml/installation.sgml\n> > @@ -2057,8 +2057,7 @@ meson setup build -Dssl=openssl\n> > <screen>\n> > meson configure -Dcassert=true\n> > </screen>\n> > - <command>meson configure</command>'s commonly used command-line\n> options\n> > - are explained in <xref linkend=\"meson-options\"/>.\n> > + Commonly used build options for <command>meson configure</command>\n> (and <command>meson setup</command>) are explained in <xref\n> linkend=\"meson-options\"/>.\n> > </para>\n> > </step>\n> >\n> > @@ -2078,6 +2077,13 @@ ninja\n> > processes used with the command line argument <literal>-j</literal>.\n> > </para>\n> >\n> > + <para>\n> > + If you want to build the docs, you can type:\n> > +<screen>\n> > +ninja docs\n> > +</screen>\n> > + </para>\n>\n> \"type\" sounds a bit too, IDK, process oriented. \"To build the docs, use\"?\n>\n\nSure, that works.\n\n>\n>\n> > Subject: [PATCH v9 2/5] Add data layout options sub-section in\n> installation\n> > docs\n> >\n> > This commit separates out blocksize, segsize and wal_blocksize\n> > options into a separate Data layout options sub-section in both\n> > the make and meson docs. They were earlier in a miscellaneous\n> > section which included several unrelated options. 
This change\n> > also helps reduce the repetition of the warnings that changing\n> > these parameters breaks on-disk compatibility.\n>\n> Makes sense. I'm planning to apply this unless Peter or somebody else has\n> further feedback.\n>\n\nCool.\n\n>\n>\n> > From 11d82aa49efb3d1cbc08f14562a757f115053c8b Mon Sep 17 00:00:00 2001\n> > From: Samay Sharma <smilingsamay@gmail.com>\n> > Date: Mon, 13 Feb 2023 16:23:52 -0800\n> > Subject: [PATCH v9 3/5] Remove Anti-Features section from Installation\n> from\n> > source docs\n> >\n> > Currently, several meson setup options are listed in anti-features.\n> > However, they are similar to most other options in the postgres\n> > features list as they are 'auto' features themselves. Also, other\n> > options are likely better suited to the developer options section.\n> > This commit, therefore, moves the options listed in the anti-features\n> > section into other sections and removes that section.\n> >\n> > For consistency, this reorganization has been done on the make section\n> > of the docs as well.\n>\n> Makes sense to me. 
\"Anti-Features\" is confusing as a name to start with.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi,On Tue, Apr 11, 2023 at 10:18 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2023-03-28 12:27:26 -0700, samay sharma wrote:\n> Subject: [PATCH v9 1/5] Make minor additions and corrections to meson docs\n> \n> This commit makes a few corrections to the meson docs\n> and adds a few instructions and links for better clarity.\n> ---\n> doc/src/sgml/installation.sgml | 24 +++++++++++++++---------\n> 1 file changed, 15 insertions(+), 9 deletions(-)\n> \n> diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml\n> index 70ab5b77d8..e3b9b6c0d0 100644\n> --- a/doc/src/sgml/installation.sgml\n> +++ b/doc/src/sgml/installation.sgml\n> @@ -2057,8 +2057,7 @@ meson setup build -Dssl=openssl\n> <screen>\n> meson configure -Dcassert=true\n> </screen>\n> - <command>meson configure</command>'s commonly used command-line options\n> - are explained in <xref linkend=\"meson-options\"/>.\n> + Commonly used build options for <command>meson configure</command> (and <command>meson setup</command>) are explained in <xref linkend=\"meson-options\"/>.\n> </para>\n> </step>\n> \n> @@ -2078,6 +2077,13 @@ ninja\n> processes used with the command line argument <literal>-j</literal>.\n> </para>\n> \n> + <para>\n> + If you want to build the docs, you can type:\n> +<screen>\n> +ninja docs\n> +</screen>\n> + </para>\n\n\"type\" sounds a bit too, IDK, process oriented. \"To build the docs, use\"?Sure, that works. \n\n\n> Subject: [PATCH v9 2/5] Add data layout options sub-section in installation\n> docs\n> \n> This commit separates out blocksize, segsize and wal_blocksize\n> options into a separate Data layout options sub-section in both\n> the make and meson docs. They were earlier in a miscellaneous\n> section which included several unrelated options. 
This change\n> also helps reduce the repetition of the warnings that changing\n> these parameters breaks on-disk compatibility.\n\nMakes sense. I'm planning to apply this unless Peter or somebody else has\nfurther feedback.Cool. \n\n\n> From 11d82aa49efb3d1cbc08f14562a757f115053c8b Mon Sep 17 00:00:00 2001\n> From: Samay Sharma <smilingsamay@gmail.com>\n> Date: Mon, 13 Feb 2023 16:23:52 -0800\n> Subject: [PATCH v9 3/5] Remove Anti-Features section from Installation from\n> source docs\n> \n> Currently, several meson setup options are listed in anti-features.\n> However, they are similar to most other options in the postgres\n> features list as they are 'auto' features themselves. Also, other\n> options are likely better suited to the developer options section.\n> This commit, therefore, moves the options listed in the anti-features\n> section into other sections and removes that section.\n> \n> For consistency, this reorganization has been done on the make section\n> of the docs as well.\n\nMakes sense to me. \"Anti-Features\" is confusing as a name to start with.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 11 Apr 2023 14:41:38 -0700",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On 11.04.23 19:18, Andres Freund wrote:\n>> Subject: [PATCH v9 2/5] Add data layout options sub-section in installation\n>> docs\n>>\n>> This commit separates out blocksize, segsize and wal_blocksize\n>> options into a separate Data layout options sub-section in both\n>> the make and meson docs. They were earlier in a miscellaneous\n>> section which included several unrelated options. This change\n>> also helps reduce the repetition of the warnings that changing\n>> these parameters breaks on-disk compatibility.\n> \n> Makes sense. I'm planning to apply this unless Peter or somebody else has\n> further feedback.\n\nI'm okay with patches 0001 through 0004.\n\nI don't like 0005. I think we should drop that for now and maybe have a \nseparate discussion under a separate heading about that.\n\n> \n> \n>> From 11d82aa49efb3d1cbc08f14562a757f115053c8b Mon Sep 17 00:00:00 2001\n>> From: Samay Sharma <smilingsamay@gmail.com>\n>> Date: Mon, 13 Feb 2023 16:23:52 -0800\n>> Subject: [PATCH v9 3/5] Remove Anti-Features section from Installation from\n>> source docs\n>>\n>> Currently, several meson setup options are listed in anti-features.\n>> However, they are similar to most other options in the postgres\n>> features list as they are 'auto' features themselves. Also, other\n>> options are likely better suited to the developer options section.\n>> This commit, therefore, moves the options listed in the anti-features\n>> section into other sections and removes that section.\n>>\n>> For consistency, this reorganization has been done on the make section\n>> of the docs as well.\n> \n> Makes sense to me. \"Anti-Features\" is confusing as a name to start with.\n\n\n\n",
"msg_date": "Wed, 12 Apr 2023 23:19:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hey!\n\nNice work organizing the docs. Looks pretty good. I had a single comment\nregarding the Meson docs in general.\n\nIt seems like Postgres cares a bit about Windows/Mac development. Is\nninja the blessed way to build Postgres on Windows and Mac? Otherwise, I\nwould suggest just being more generic and replace instances of `ninja\nxxx` in the docs with `meson compile xxx`, making the command\nbackend-agnostic to accomodate XCode and VS project users.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 30 May 2023 13:09:47 -0500",
"msg_from": "\"Tristan Partin\" <tristan@neon.tech>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Wed Apr 12, 2023 at 4:19 PM CDT, Peter Eisentraut wrote:\n> On 11.04.23 19:18, Andres Freund wrote:\n> >> Subject: [PATCH v9 2/5] Add data layout options sub-section in installation\n> >> docs\n> >>\n> >> This commit separates out blocksize, segsize and wal_blocksize\n> >> options into a separate Data layout options sub-section in both\n> >> the make and meson docs. They were earlier in a miscellaneous\n> >> section which included several unrelated options. This change\n> >> also helps reduce the repetition of the warnings that changing\n> >> these parameters breaks on-disk compatibility.\n> > \n> > Makes sense. I'm planning to apply this unless Peter or somebody else has\n> > further feedback.\n>\n> I'm okay with patches 0001 through 0004.\n>\n> I don't like 0005. I think we should drop that for now and maybe have a \n> separate discussion under a separate heading about that.\n\nWith regard to 0005, perhaps it makes the most sense to use the XDG\nDirectory Specification as an example. Install into ~/.local or\n~/.local/postgres (as the Meson default prefix is). Put the data into\n~/.var/lib/postgres or something similar.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 30 May 2023 13:13:52 -0500",
"msg_from": "\"Tristan Partin\" <tristan@neon.tech>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-28 12:27:26 -0700, samay sharma wrote:\n> + <para>\n> + If you want to build the docs, you can type:\n> +<screen>\n> +ninja docs\n> +</screen>\n> + </para>\n\nTo me 'you can type' sounds odd. To me even just \"To build the docs:\" would\nsound better. But the make docs do it \"your\" way, so I'll just go along with\nit for now.\n\n\n> From c5e637a54c2b83e5bd8c4155784d97e82937eb51 Mon Sep 17 00:00:00 2001\n> From: Samay Sharma <smilingsamay@gmail.com>\n> Date: Mon, 6 Feb 2023 16:09:42 -0800\n> Subject: [PATCH v9 2/5] Add data layout options sub-section in installation\n> docs\n>\n> This commit separates out blocksize, segsize and wal_blocksize\n> options into a separate Data layout options sub-section in both\n> the make and meson docs. They were earlier in a miscellaneous\n> section which included several unrelated options. This change\n> also helps reduce the repetition of the warnings that changing\n> these parameters breaks on-disk compatibility.\n\nI still like this change, but ISTM that the \"Data Layout\" section should\nfollow the \"PostgreSQL Features\" section, rather than follow \"Anti Features\",\n\"Build Process Details\" and \"Miscellaneous\". I realize some of these are\nreorganized later on, but even then \"Build Process Details\"\n\nWould anybody mind if I swapped these around?\n\n\n> + <varlistentry id=\"meson-option-with-blocksize\">\n\nI don't quite understand the \"-with\" added to the ids?\n\n\n> From 11d82aa49efb3d1cbc08f14562a757f115053c8b Mon Sep 17 00:00:00 2001\n> From: Samay Sharma <smilingsamay@gmail.com>\n> Date: Mon, 13 Feb 2023 16:23:52 -0800\n> Subject: [PATCH v9 3/5] Remove Anti-Features section from Installation from\n> source docs\n>\n> Currently, several meson setup options are listed in anti-features.\n> However, they are similar to most other options in the postgres\n> features list as they are 'auto' features themselves. 
Also, other\n> options are likely better suited to the developer options section.\n> This commit, therefore, moves the options listed in the anti-features\n> section into other sections and removes that section.\n> For consistency, this reorganization has been done on the make section\n> of the docs as well.\n> ---\n> doc/src/sgml/installation.sgml | 140 ++++++++++++++-------------------\n> 1 file changed, 57 insertions(+), 83 deletions(-)\n>\n> diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml\n> index 7e65cdd72e..d7ab0c205e 100644\n> --- a/doc/src/sgml/installation.sgml\n> +++ b/doc/src/sgml/installation.sgml\n> @@ -1214,23 +1214,6 @@ build-postgresql:\n> </listitem>\n> </varlistentry>\n>\n> - </variablelist>\n> -\n> - </sect3>\n> -\n> - <sect3 id=\"configure-options-anti-features\">\n> - <title>Anti-Features</title>\n> -\n> - <para>\n> - The options described in this section allow disabling\n> - certain <productname>PostgreSQL</productname> features that are built\n> - by default, but which might need to be turned off if the required\n> - software or system features are not available. Using these options is\n> - not recommended unless really necessary.\n> - </para>\n> -\n> - <variablelist>\n> -\n> <varlistentry id=\"configure-option-without-readline\">\n> <term><option>--without-readline</option></term>\n> <listitem>\n\nI don't think this is quite right. The section above the list says\n\n\"The options described in this section enable building of various PostgreSQL\nfeatures that are not built by default. 
Most of these are non-default only\nbecause they require additional software, as described in Section 17.1.\"\n\nSo just merging --without-icu, --without-readline, --without-zlib,\n--disable-thread-safety, in with the rest doesn't quite seem right.\n\nI suspect that the easiest way for that is to just move --disable-atomics,\n--disable-spinlocks to the developer section and then to leave the\nanti-features section around for autoconf.\n\nAny better idea?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Jun 2023 21:00:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On 10.06.23 06:00, Andres Freund wrote:\n>> From c5e637a54c2b83e5bd8c4155784d97e82937eb51 Mon Sep 17 00:00:00 2001\n>> From: Samay Sharma<smilingsamay@gmail.com>\n>> Date: Mon, 6 Feb 2023 16:09:42 -0800\n>> Subject: [PATCH v9 2/5] Add data layout options sub-section in installation\n>> docs\n>>\n>> This commit separates out blocksize, segsize and wal_blocksize\n>> options into a separate Data layout options sub-section in both\n>> the make and meson docs. They were earlier in a miscellaneous\n>> section which included several unrelated options. This change\n>> also helps reduce the repetition of the warnings that changing\n>> these parameters breaks on-disk compatibility.\n> I still like this change, but ISTM that the \"Data Layout\" section should\n> follow the \"PostgreSQL Features\" section, rather than follow \"Anti Features\",\n> \"Build Process Details\" and \"Miscellaneous\". I realize some of these are\n> reorganized later on, but even then \"Build Process Details\"\n> \n> Would anybody mind if I swapped these around?\n\nI don't mind a Data Layout section in principle, but I wonder whether \nit's worth changing now. The segsize option is proposed to be turned \ninto a run-time option (and/or removed). For the WAL block size, I had \npreviously mentioned, I don't think it is correct that pg_upgrade should \nactually care about it. So I wouldn't spend too much time trying to \ncarefully refactor the notes on the data layout options if we're going \nto have to change them around before long again anyway.\n\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 22:33:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
},
{
"msg_contents": "On Wed, 29 Mar 2023 at 00:57, samay sharma <smilingsamay@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Mar 15, 2023 at 4:28 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> > [PATCH v8 1/5] Make minor additions and corrections to meson docs\n>>\n>> The last hunk revealed that there is some mixing up between meson setup\n>> and meson configure. This goes a bit further. For example, earlier it\n>> says that to get a list of meson setup options, call meson configure\n>> --help and look at https://mesonbuild.com/Commands.html#configure, which\n>> are both wrong. Also later throughout the text it uses one or the\n>> other. I think this has the potential to be very confusing, and we\n>> should clean this up carefully.\n>>\n>> The text about additional meson test options maybe should go into the\n>> regress.sgml chapter?\n>\n>\n> I tried to make the meson setup and meson configure usage consistent. I've removed the text for the test options.\n>>\n>>\n>>\n>> > [PATCH v8 2/5] Add data layout options sub-section in installation\n>> docs\n>>\n>> This makes sense. Please double check your patch for correct title\n>> casing, vertical spacing of XML, to keep everything looking consistent.\n>\n>\n> Thanks for noticing. Made it consistent on both sides.\n>>\n>>\n>> This text isn't yours, but since your patch emphasizes it, I wonder if\n>> it could use some clarification:\n>>\n>> + These options affect how PostgreSQL lays out data on disk.\n>> + Note that changing these breaks on-disk database compatibility,\n>> + meaning you cannot use <command>pg_upgrade</command> to upgrade to\n>> + a build with different values of these options.\n>>\n>> This isn't really correct. What breaking on-disk compatibility means is\n>> that you can't use a server compiled one way with a data directory\n>> initialized by binaries compiled another way. 
pg_upgrade may well have\n>> the ability to upgrade between one or the other; that's up to pg_upgrade\n>> to figure out but not an intrinsic property. (I wonder why pg_upgrade\n>> cares about the WAL block size.)\n>\n>\n> Fixed.\n>>\n>>\n>>\n>> > [PATCH v8 3/5] Remove Anti-Features section from Installation from\n>> source docs\n>>\n>> Makes sense. But is \"--disable-thread-safety\" really a developer\n>> feature? I think not.\n>>\n>\n> Moved to PostgreSQL features. Do you think there's a better place for it?\n>\n>>\n>>\n>> > [PATCH v8 4/5] Re-organize Miscellaneous section\n>>\n>> This moves the Miscellaneous section after Developer Features. I think\n>> Developer Features should be last.\n>>\n>> Maybe should remove this section and add the options to the regular\n>> PostgreSQL Features section.\n>\n>\n> Yes, that makes sense. Made this change.\n>>\n>>\n>> Also consider the grouping in meson_options.txt, which is slightly\n>> different yet.\n>\n>\n> Removed Misc options section from meson_options.txt too.\n>>\n>>\n>>\n>> > [PATCH v8 5/5] Change Short Version for meson installation guide\n>>\n>> +# create working directory\n>> +mkdir postgres\n>> +cd postgres\n>> +\n>> +# fetch source code\n>> +git clone https://git.postgresql.org/git/postgresql.git src\n>>\n>> This comes after the \"Getting the Source\" section, so at this point they\n>> already have the source and don't need to do \"git clone\" etc. again.\n>>\n>> +# setup and enter build directory (done only first time)\n>> +## Unix based platforms\n>> +meson setup build src --prefix=$PWD/install\n>> +\n>> +## Windows\n>> +meson setup build src --prefix=%cd%/install\n>>\n>> Maybe some people work this way, but to me the directory structures you\n>> create here are completely weird.\n>\n>\n> I'd like to discuss what you think is a good directory structure to work with. I've mentioned some of the drawbacks I see with the current structure for the short version. 
I know this structure can feel different but it feeling weird is not ideal. Do you have a directory structure in mind which is different but doesn't feel odd to you?\n>\n>>\n>>\n>> +# Initialize a new database\n>> +../install/bin/initdb -D ../data\n>> +\n>> +# Start database\n>> +../install/bin/pg_ctl -D ../data/ -l logfile start\n>> +\n>> +# Connect to the database\n>> +../install/bin/psql -d postgres\n>>\n>> The terminology here needs to be tightened up. You are using \"database\"\n>> here to mean three different things.\n>\n>\n> I'll address this together once we are aligned on the overall directory structure etc.\n>\n>> There are a few reasons why I had done this. Some reasons Andres has described in his previous email and I'll add a few specific examples on why having the same section for both might not be a good idea.\n>>\n>> * Having readline and zlib as required for building PostgreSQL is now wrong because they are not required for meson builds. Also, the name of the configs are different for make and meson and the current section only lists the make ones.\n>> * There are many references to configure in that section which don't apply to meson.\n>> * Last I checked Flex and Bison were always required to build via meson but not for make and the current section doesn't explain those differences.\n>>\n>> I spent a good amount of time thinking if we could have a single section, clarify these differences to make it correct and not confuse the users. I couldn't find a way to do all three. Therefore, I think we should move to a different requirements section for both. 
I'm happy to re-propose the previous version which separates them but wanted to see if anybody has better ideas.\n>\n>\n> Do you have thoughts on the requirements section and the motivation to have two different versions I had mentioned upthread?\n\nI have changed the status of commitfest entry to \"Returned with\nFeedback\" as there was no followup on Tristan Partin and Andres's\ncomments from many months. Please handle the comments and add a new\ncommitfest entry if required for any pending tasks left.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 07:23:36 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation for building with meson"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing\nhttps://postgr.es/m/CAD21AoBe2o2D%3Dxyycsxw2bQOD%3DzPj7ETuJ5VYGN%3DdpoTiCMRJQ%40mail.gmail.com\nI noticed that pg_recvlogical prints\n\"pg_recvlogical: error: unexpected termination of replication stream: \"\n\nwhen signalled with SIGINT/TERM.\n\nOddly enough, that looks to have \"always\" been the case, even though clearly\nthe code tried to make provisions for a different outcome.\n\n\nIt looks to me like all that's needed is to gate the block printing the\nmessage with an !time_to_abort.\n\n\nI also then noticed that we don't fsync the output file in cases of errors -\nthat seems wrong to me? Looks to me like that block should be moved till after\nthe error:?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Oct 2022 14:39:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 3:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> While reviewing\n> https://postgr.es/m/CAD21AoBe2o2D%3Dxyycsxw2bQOD%3DzPj7ETuJ5VYGN%3DdpoTiCMRJQ%40mail.gmail.com\n> I noticed that pg_recvlogical prints\n> \"pg_recvlogical: error: unexpected termination of replication stream: \"\n>\n> when signalled with SIGINT/TERM.\n>\n> Oddly enough, that looks to have \"always\" been the case, even though clearly\n> the code tried to make provisions for a different outcome.\n>\n>\n> It looks to me like all that's needed is to gate the block printing the\n> message with an !time_to_abort.\n\n+1. How about emitting a message like its friend pg_receivewal, like\nthe attached patch?\n\n> I also then noticed that we don't fsync the output file in cases of errors -\n> that seems wrong to me? Looks to me like that block should be moved till after\n> the error:?\n\nHow about something like the attached patch?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 20 Oct 2022 13:28:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "At Thu, 20 Oct 2022 13:28:45 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Thu, Oct 20, 2022 at 3:10 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > While reviewing\n> > https://postgr.es/m/CAD21AoBe2o2D%3Dxyycsxw2bQOD%3DzPj7ETuJ5VYGN%3DdpoTiCMRJQ%40mail.gmail.com\n> > I noticed that pg_recvlogical prints\n> > \"pg_recvlogical: error: unexpected termination of replication stream: \"\n> >\n> > when signalled with SIGINT/TERM.\n> >\n> > Oddly enough, that looks to have \"always\" been the case, even though clearly\n> > the code tried to make provisions for a different outcome.\n> >\n> >\n> > It looks to me like all that's needed is to gate the block printing the\n> > message with an !time_to_abort.\n\n+1\n\n> +1. How about emitting a message like its friend pg_receivewal, like\n> the attached patch?\n\nI'm not a fan of treating SIGINT as an error in this case. It calls\nprepareToTerminate() when time_to_abort and everything goes fine after\nthen. So I think we should do the same thing after receiving an\ninterrupt. This also does file-sync naturally as a part of normal\nshutdown. I'm also not a fan of doing fsync at error.\n\n> > I also then noticed that we don't fsync the output file in cases of errors -\n> > that seems wrong to me? Looks to me like that block should be moved till after\n> > the error:?\n> \n> How about something like the attached patch?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 21 Oct 2022 11:21:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 7:52 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > +1. How about emitting a message like its friend pg_receivewal, like\n> > the attached patch?\n>\n> I'm not a fan of treating SIGINT as an error in this case. It calls\n> prepareToTerminate() when time_to_abort and everything goes fine after\n> then. So I think we should do the same thing after receiving an\n> interrupt. This also does file-sync naturally as a part of normal\n> shutdown. I'm also not a fan of doing fsync at error.\n\nI think the pg_recvlogical can gracefully exit on both SIGINT and\nSIGTERM to keep things simple.\n\n> > > I also then noticed that we don't fsync the output file in cases of errors -\n> > > that seems wrong to me? Looks to me like that block should be moved till after\n> > > the error:?\n> >\n> > How about something like the attached patch?\n\nThe attached patch (pg_recvlogical_graceful_interrupt.text) has a\ncouple of problems, I believe. We're losing prepareToTerminate() with\nkeepalive true and we're not skipping pg_log_error(\"unexpected\ntermination of replication stream: %s\" upon interrupt, after all we're\nhere discussing how to avoid it.\n\nI came up with the attached v2 patch, please have a look.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Oct 2022 08:15:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 8:15 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Oct 21, 2022 at 7:52 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > > +1. How about emitting a message like its friend pg_receivewal, like\n> > > the attached patch?\n> >\n> > I'm not a fan of treating SIGINT as an error in this case. It calls\n> > prepareToTerminate() when time_to_abort and everything goes fine after\n> > then. So I think we should do the same thing after receiving an\n> > interrupt. This also does file-sync naturally as a part of normal\n> > shutdown. I'm also not a fan of doing fsync at error.\n>\n> I think the pg_recvlogical can gracefully exit on both SIGINT and\n> SIGTERM to keep things simple.\n>\n> > > > I also then noticed that we don't fsync the output file in cases of errors -\n> > > > that seems wrong to me? Looks to me like that block should be moved till after\n> > > > the error:?\n> > >\n> > > How about something like the attached patch?\n>\n> The attached patch (pg_recvlogical_graceful_interrupt.text) has a\n> couple of problems, I believe. We're losing prepareToTerminate() with\n> keepalive true and we're not skipping pg_log_error(\"unexpected\n> termination of replication stream: %s\" upon interrupt, after all we're\n> here discussing how to avoid it.\n>\n> I came up with the attached v2 patch, please have a look.\n\nFWIW, I added it to CF - https://commitfest.postgresql.org/40/3966/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 27 Oct 2022 16:56:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-24 08:15:11 +0530, Bharath Rupireddy wrote:\n> I came up with the attached v2 patch, please have a look.\n\nThanks for working on this!\n\n\n> +\t\t/* When we get SIGINT/SIGTERM, we exit */\n> +\t\tif (ready_to_exit)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * Try informing the server about our exit, but don't wait around\n> +\t\t\t * or retry on failure.\n> +\t\t\t */\n> +\t\t\t(void) PQputCopyEnd(conn, NULL);\n> +\t\t\t(void) PQflush(conn);\n> +\t\t\ttime_to_abort = ready_to_exit;\n\nThis doesn't strike me as great - because the ready_to_exit isn't checked in\nthe loop around StreamLogicalLog(), we'll reconnect if something else causes\nStreamLogicalLog() to return.\n\nWhy do we need both time_to_abort and ready_to_exit? Perhaps worth noting that\ntime_to_abort is still an sig_atomic_t, but isn't modified in a signal\nhandler, which seems a bit unnecessarily confusing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 27 Oct 2022 16:11:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 4:41 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-10-24 08:15:11 +0530, Bharath Rupireddy wrote:\n>\n>\n> > + /* When we get SIGINT/SIGTERM, we exit */\n> > + if (ready_to_exit)\n> > + {\n> > + /*\n> > + * Try informing the server about our exit, but don't wait around\n> > + * or retry on failure.\n> > + */\n> > + (void) PQputCopyEnd(conn, NULL);\n> > + (void) PQflush(conn);\n> > + time_to_abort = ready_to_exit;\n>\n> This doesn't strike me as great - because the ready_to_exit isn't checked in\n> the loop around StreamLogicalLog(), we'll reconnect if something else causes\n> StreamLogicalLog() to return.\n\nFixed.\n\n> Why do we need both time_to_abort and ready_to_exit?\n\nIntention to have ready_to_exit is to be able to distinguish between\nSIGINT/SIGTERM and aborting when endpos is reached so that necessary\ncode is skipped/executed and proper logs are printed.\n\n> Perhaps worth noting that\n> time_to_abort is still an sig_atomic_t, but isn't modified in a signal\n> handler, which seems a bit unnecessarily confusing.\n\ntime_to_abort is just a static variable, no?\n\n+static bool time_to_abort = false;\n+static volatile sig_atomic_t ready_to_exit = false;\n\nPlease see the attached v3 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 28 Oct 2022 08:41:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHello\r\n\r\nThe patch applies and tests fine. I like the way to have both ready_to_exit and time_to_abort variables to control the exit sequence. I think the (void) cast can be removed in front of PQputCopyEnd(), PQflush for consistency purposes as it does not give warnings and everywhere else does not have those casts. \r\n\r\nthank you\r\nCary",
"msg_date": "Thu, 06 Apr 2023 23:41:43 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 08:15:11AM +0530, Bharath Rupireddy wrote:\n> The attached patch (pg_recvlogical_graceful_interrupt.text) has a\n> couple of problems, I believe. We're losing prepareToTerminate() with\n> keepalive true and we're not skipping pg_log_error(\"unexpected\n> termination of replication stream: %s\" upon interrupt, after all we're\n> here discussing how to avoid it.\n> \n> I came up with the attached v2 patch, please have a look.\n\nThis thread has slipped through the feature freeze deadline. Would\npeople be OK to do something now on HEAD? A backpatch is also in\norder, IMO, as the current behavior looks confusing under SIGINT and\nSIGTERM.\n--\nMichael",
"msg_date": "Tue, 11 Apr 2023 15:12:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 11:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 24, 2022 at 08:15:11AM +0530, Bharath Rupireddy wrote:\n> > The attached patch (pg_recvlogical_graceful_interrupt.text) has a\n> > couple of problems, I believe. We're losing prepareToTerminate() with\n> > keepalive true and we're not skipping pg_log_error(\"unexpected\n> > termination of replication stream: %s\" upon interrupt, after all we're\n> > here discussing how to avoid it.\n> >\n> > I came up with the attached v2 patch, please have a look.\n>\n> This thread has slipped through the feature freeze deadline. Would\n> people be OK to do something now on HEAD? A backpatch is also in\n> order, IMO, as the current behavior looks confusing under SIGINT and\n> SIGTERM.\n\nIMO, +1 for HEAD/PG16 and +0.5 for backpatching as it may not be so\ncritical to backpatch all the way down. What may happen without this\npatch is that the output file isn't fsync-ed upon SIGINT/SIGTERM.\nWell, is it a critical issue on production servers?\n\nOn Fri, Apr 7, 2023 at 5:12 AM Cary Huang <cary.huang@highgo.ca> wrote:\n>\n> The following review has been posted through the commitfest application:\n>\n> The patch applies and tests fine. I like the way to have both ready_to_exit and time_to_abort variables to control the exit sequence. I think the (void) cast can be removed in front of PQputCopyEnd(), PQflush for consistency purposes as it does not give warnings and everywhere else does not have those casts.\n\nThanks for reviewing. I removed the (void) casts like elsewhere in the\ncode, however, I didn't change such casts in prepareToTerminate() to\nnot create a diff.\n\nI'm attaching the v4 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 27 Apr 2023 11:24:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 11:24:52AM +0530, Bharath Rupireddy wrote:\n> IMO, +1 for HEAD/PG16 and +0.5 for backpatching as it may not be so\n> critical to backpatch all the way down. What may happen without this\n> patch is that the output file isn't fsync-ed upon SIGINT/SIGTERM.\n> Well, is it a critical issue on production servers?\n\nIt is also true that it's been this way for years with basically\nnobody complaining outside this thread. So there is also an argument\nabout leaving v16 out of the picture, and do that only in 17~ to avoid\nplaying with the branch stability more than necessary? I see 7 open\nitems as of today, and there are TAP tests linked to pg_recvlogical.\nThat should be OK, because none of these tests rely on specific\nsignals, but the buildfarm has some weird setups, as well..\n--\nMichael",
"msg_date": "Thu, 27 Apr 2023 15:22:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Thu Apr 27, 2023 at 12:54 AM CDT, Bharath Rupireddy wrote:\n> Thanks for reviewing. I removed the (void) casts like elsewhere in the\n> code, however, I didn't change such casts in prepareToTerminate() to\n> not create a diff.\n>\n> I'm attaching the v4 patch for further review.\n\nBharath,\n\nI signed up to review the patch for the commitfest. The patch looks\npretty good to me, but I would like to come to a conclusion on what\nAndres posted earlier in the thread.\n\n> Why do we need both time_to_abort and ready_to_exit?\n\nI am trying to understand why we need both as well. Maybe I am missing\nsomething important :).\n\n> /* It is not unexepected termination error when Ctrl-C'ed. */\n\nMy only other comment is that it would be nice to have the word \"an\"\nbefore unexpected.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 06 Jul 2023 10:29:10 -0500",
"msg_from": "\"Tristan Partin\" <tristan@neon.tech>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Thu, Jul 06, 2023 at 10:29:10AM -0500, Tristan Partin wrote:\n> On Thu Apr 27, 2023 at 12:54 AM CDT, Bharath Rupireddy wrote:\n>> Why do we need both time_to_abort and ready_to_exit?\n> \n> I am trying to understand why we need both as well. Maybe I am missing\n> something important :).\n\nAs StreamLogicalLog() states once it leaves its main loop because\ntime_to_abort has been switched to true, we want a clean exit. I\nthink that this patch is just a more complicated way to avoid doing\ntwice the operations done by prepareToTerminate(). So how about\nmoving the prepareToTerminate() call outside the main streaming loop\nand call it when time_to_abort is true? Then, I would suggest to\nchange the keepalive argument of prepareToTerminate() to an enum able\nto handle three values to log the reason why the tool is stopping: the\nend of WAL, an interruption or a keepalive when logging. There are\ntwo of them now, but we want a third mode for the signals.\n\n> > /* It is not unexepected termination error when Ctrl-C'ed. */\n> \n> My only other comment is that it would be nice to have the word \"an\"\n> before unexpected.\n\ns/unexepected/unexpected/. Still, it seems to me that we don't need\nthis comment.\n--\nMichael",
"msg_date": "Mon, 10 Jul 2023 13:44:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 01:44:45PM +0900, Michael Paquier wrote:\n> As StreamLogicalLog() states once it leaves its main loop because\n> time_to_abort has been switched to true, we want a clean exit. I\n> think that this patch is just a more complicated way to avoid doing\n> twice the operations done by prepareToTerminate(). So how about\n> moving the prepareToTerminate() call outside the main streaming loop\n> and call it when time_to_abort is true? Then, I would suggest to\n> change the keepalive argument of prepareToTerminate() to an enum able\n> to handle three values to log the reason why the tool is stopping: the\n> end of WAL, an interruption or a keepalive when logging. There are\n> two of them now, but we want a third mode for the signals.\n\nIt took me some time to come back to this one, but attached is what I\nhad in mind. This stuff has three reasons to stop: keepalive, end LSN\nor signal. This makes the code easier to follow.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Wed, 19 Jul 2023 11:34:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 8:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> It took me some time to come back to this one, but attached is what I\n> had in mind. This stuff has three reasons to stop: keepalive, end LSN\n> or signal. This makes the code easier to follow.\n>\n> Thoughts or comments?\n\nThanks. I have some comments on v5:\n\n1. I don't think we need a stop_lsn variable, the cur_record_lsn can\nhelp if it's defined outside the loop. With this, the patch can\nfurther be simplified as attached v6.\n\n2. And, I'd prefer not to assume the stop_reason as signal by default.\nWhile it works for now because the remaining place where time_to_abort\nis set to true is only in the signal handler, it is not extensible, if\nthere's another exit condition comes in future that sets time_to_abort\nto true.\n\n3. pg_log_info(\"end position %X/%X reached on signal\", .... For\nsignal, end position is a bit vague wording and I think we can just\nsay pg_log_info(\"received interrupt signal, exiting\"); like\npg_receivewal. We really can't have a valid stop_lsn for signal exit\nbecause that depends on when signal arrives in the while loop. If at\nall, someone wants to know the last received LSN - they can look at\nthe other messages that pg_recvlogical emits - pg_recvlogical:\nconfirming write up to 0/2BFFFD0, flush to 0/2BFFFD0 (slot myslot).\n\n4. 
With v5, it was taking a while to exit after the first CTRL+C, see\nmultiple CTRL+Cs at the end:\nubuntu::~/postgres/inst/bin$ ./pg_recvlogical --slot=lsub1_repl_slot\n--file=lsub1.data --start --verbose\npg_recvlogical: starting log streaming at 0/0 (slot lsub1_repl_slot)\npg_recvlogical: streaming initiated\npg_recvlogical: confirming write up to 0/0, flush to 0/0 (slot lsub1_repl_slot)\npg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n(slot lsub1_repl_slot)\npg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n(slot lsub1_repl_slot)\npg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n(slot lsub1_repl_slot)\n^Cpg_recvlogical: end position 0/2867D70 reached on signal\n^C^C^C^C^C^C^C^C^C^C^C^C\n^C^C^C^C^C^C^C^C^C^C^C^C\n\n5. FWIW, on HEAD we'd get the following and the patch emits a better messaging:\nubuntu:~/postgres/inst/bin$ ./pg_recvlogical --slot=lsub1_repl_slot\n--file=lsub1.data --start --dbname='host=localhost port=5432\ndbname=postgres user=ubuntu' --verbose\npg_recvlogical: starting log streaming at 0/0 (slot lsub1_repl_slot)\npg_recvlogical: streaming initiated\npg_recvlogical: confirming write up to 0/0, flush to 0/0 (slot lsub1_repl_slot)\npg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n(slot lsub1_repl_slot)\npg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n(slot lsub1_repl_slot)\npg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n(slot lsub1_repl_slot)\n^Cpg_recvlogical: error: unexpected termination of replication stream:\n\nAttaching v6 patch with the above changes to v6. Thoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Jul 2023 12:07:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:07 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> 4. With v5, it was taking a while to exit after the first CTRL+C, see\n> multiple CTRL+Cs at the end:\n> ubuntu::~/postgres/inst/bin$ ./pg_recvlogical --slot=lsub1_repl_slot\n> --file=lsub1.data --start --verbose\n> pg_recvlogical: starting log streaming at 0/0 (slot lsub1_repl_slot)\n> pg_recvlogical: streaming initiated\n> pg_recvlogical: confirming write up to 0/0, flush to 0/0 (slot lsub1_repl_slot)\n> pg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n> (slot lsub1_repl_slot)\n> pg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n> (slot lsub1_repl_slot)\n> pg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n> (slot lsub1_repl_slot)\n> ^Cpg_recvlogical: end position 0/2867D70 reached on signal\n> ^C^C^C^C^C^C^C^C^C^C^C^C\n> ^C^C^C^C^C^C^C^C^C^C^C^C\n\nI think the delay is expected for the reason specified below and is\nnot because of any of the changes in v5. As far as CTRL+C is\nconcerned, it is a clean exit and hence we can't escape the while(1)\nloop.\n\n/*\n * We're doing a client-initiated clean exit and have sent CopyDone to\n * the server. Drain any messages, so we don't miss a last-minute\n * ErrorResponse. The walsender stops generating XLogData records once\n * it sees CopyDone, so expect this to finish quickly. After CopyDone,\n * it's too late for sendFeedback(), even if this were to take a long\n * time. Hence, use synchronous-mode PQgetCopyData().\n */\n while (1)\n {\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Jul 2023 13:33:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 12:07:21PM +0530, Bharath Rupireddy wrote:\n> 1. I don't think we need a stop_lsn variable, the cur_record_lsn can\n> help if it's defined outside the loop. With this, the patch can\n> further be simplified as attached v6.\n\nOkay by me.\n\n> 2. And, I'd prefer not to assume the stop_reason as signal by default.\n> While it works for now because the remaining place where time_to_abort\n> is set to true is only in the signal handler, it is not extensible, if\n> there's another exit condition comes in future that sets time_to_abort\n> to true.\n\nYeah, I was also wondering whether this should be set by the signal\nhandler in this case while storing the reason statically on a specific\nsig_atomic_t.\n\n> 3. pg_log_info(\"end position %X/%X reached on signal\", .... For\n> signal, end position is a bit vague wording and I think we can just\n> say pg_log_info(\"received interrupt signal, exiting\"); like\n> pg_receivewal. We really can't have a valid stop_lsn for signal exit\n> because that depends on when signal arrives in the while loop. If at\n> all, someone wants to know the last received LSN - they can look at\n> the other messages that pg_recvlogical emits - pg_recvlogical:\n> confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0 (slot myslot).\n\n+ case STREAM_STOP_SIGNAL:\n+ pg_log_info(\"received interrupt signal, exiting\");\n+ break;\n\nStill it is useful to report the location we have finished with when\nstopping on a signal, no? Why couldn't we use \"lsn\" here, aka\ncur_record_lsn?\n\n> 4. 
With v5, it was taking a while to exit after the first CTRL+C, see\n> multiple CTRL+Cs at the end:\n> ubuntu::~/postgres/inst/bin$ ./pg_recvlogical --slot=lsub1_repl_slot\n> --file=lsub1.data --start --verbose\n> pg_recvlogical: starting log streaming at 0/0 (slot lsub1_repl_slot)\n> pg_recvlogical: streaming initiated\n> pg_recvlogical: confirming write up to 0/0, flush to 0/0 (slot lsub1_repl_slot)\n> pg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n> (slot lsub1_repl_slot)\n> pg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n> (slot lsub1_repl_slot)\n> pg_recvlogical: confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0\n> (slot lsub1_repl_slot)\n> ^Cpg_recvlogical: end position 0/2867D70 reached on signal\n> ^C^C^C^C^C^C^C^C^C^C^C^C\n> ^C^C^C^C^C^C^C^C^C^C^C^C\n\nIsn't that where we'd need to look at a long change but we need to\nstop cleanly? That sounds expected to me.\n--\nMichael",
"msg_date": "Wed, 19 Jul 2023 17:11:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 1:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > 3. pg_log_info(\"end position %X/%X reached on signal\", .... For\n> > signal, end position is a bit vague wording and I think we can just\n> > say pg_log_info(\"received interrupt signal, exiting\"); like\n> > pg_receivewal. We really can't have a valid stop_lsn for signal exit\n> > because that depends on when signal arrives in the while loop. If at\n> > all, someone wants to know the last received LSN - they can look at\n> > the other messages that pg_recvlogical emits - pg_recvlogical:\n> > confirming write up to 0/2BFFFD0, flush to 0/2BFFFD0 (slot myslot).\n>\n> + case STREAM_STOP_SIGNAL:\n> + pg_log_info(\"received interrupt signal, exiting\");\n> + break;\n>\n> Still it is useful to report the location we have finished with when\n> stopping on a signal, no? Why couldn't we use \"lsn\" here, aka\n> cur_record_lsn?\n\nPrinting LSN on signal exit won't be correct - if signal is received\nbefore cur_record_lsn gets assigned, we will be showing an old LSN if\nit was previously assigned or invalid LSN if it wasn't assigned\npreviously. Signal arrival and processing are indeterministic, so we\ncan't always show the right info. Instead, we can just be simple in\nthe messaging without an lsn like pg_receivewal.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Jul 2023 13:46:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 01:33:15PM +0530, Bharath Rupireddy wrote:\n> I think the delay is expected for the reason specified below and is\n> not because of any of the changes in v5. As far as CTRL+C is\n> concerned, it is a clean exit and hence we can't escape the while(1)\n> loop.\n\nYes, that's also what I am expecting. That's costly when replaying a\nlarge change chunk, but we also want a clean exit on a signal as the\ncode comments document, so..\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 09:34:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 01:46:02PM +0530, Bharath Rupireddy wrote:\n> Printing LSN on signal exit won't be correct - if signal is received\n> before cur_record_lsn gets assigned, we will be showing an old LSN if\n> it was previously assigned or invalid LSN if it wasn't assigned\n> previously. Signal arrival and processing are indeterministic, so we\n> can't always show the right info.\n\nI think that there's an argument to be made because cur_record_lsn\nwill be set before coming back to the beginning of the replay loop\nwhen a stop is triggered by a signal.\n\n> Instead, we can just be simple in the messaging without an lsn like\n> pg_receivewal.\n\nAnyway, I'm OK with simple for now as it looks that you don't feel\nabout that either, and the patch is enough to fix the report of this\nthread. And one would get periodic information in --verbose mode\ndepending the sync message frequency, as well.\n\nSo, I have applied v6 after fixing two issues with it:\n- I have kept the reason as an argument of prepareToTerminate(), to be\nable to take advantage of the switch structure where compilers would\ngenerate a warning if adding a new value to StreamStopReason.\n- cur_record_lsn was not initialized at the beginning of\nStreamLogicalLog(), which is where the compiler complains about the\ncase of receiving a signal before entering in the replay loop, after\nestablising a connection.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 10:25:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_recvlogical prints bogus error when interrupted"
}
]
[
{
"msg_contents": "I think that we should decouple the PROC_VACUUM_FOR_WRAPAROUND\nautocancellation behavior in ProcSleep() from antiwraparound\nautovacuum itself. In other words I think that it should be possible\nto cancel an autovacuum that happens to be an antiwraparound\nautovacuum, just as if were any other autovacuum -- because it usually\nis no different in any real practical sense. Or at least it shouldn't\nbe seen as fundamentally different to other autovacuums at first,\nbefore relfrozenxid attains an appreciably greater age (definition of\n\"appreciably greater\" is TBD).\n\nWhy should the PROC_VACUUM_FOR_WRAPAROUND behavior happen on *exactly*\nthe same timeline as the one used to launch an antiwraparound\nautovacuum, though? There is no inherent reason why we have to do both\nthings at exactly the same XID-age-wise time. But there is reason to\nthink that doing so could make matters worse rather than better [1].\n\nMore generally I think that it'll be useful to perform \"aggressive\nbehaviors\" on their own timeline, with no two distinct aggressive\nbehaviors applied at exactly the same time. In general we ought to\ngive a less aggressive approach some room to succeed before escalating\nto a more aggressive approach -- we should see if a less aggressive\napproach will work on its own. The failsafe is the most aggressive\nintervention of all. The PROC_VACUUM_FOR_WRAPAROUND behavior is almost\nas aggressive, and should happen sooner. Antiwraparound autovacuum\nitself (which is really a separate thing to\nPROC_VACUUM_FOR_WRAPAROUND) is less aggressive still. Then you have\nthings like the cutoffs in vacuumlazy.c that control things like\nfreezing.\n\nIn short, having an \"escalatory\" approach that applies each behavior\nat different times. The exact timelines we'd want are of course\ndebatable, but the value of having multiple distinct timelines (one\nper aggressive behavior) is far less debatable. 
We should give\nproblems a chance to \"resolve themselves\", at least up to a point.\n\nThe latest version of my in progress VACUUM patch series [2]\ncompletely removes the concept of aggressive VACUUM as a discrete mode\nof operation inside vacuumlazy.c. Every existing \"aggressive-ish\nbehavior\" will be retained in some form or other, but they'll be\napplied on separate timelines, in proportion to the problem at hand.\nFor example, we'll have a separate XID cutoff for waiting for a\ncleanup lock the hard way -- we will no longer use FreezeLimit for\nthat, since that doesn't give freezing a chance to happen in the next\nVACUUM. The same VACUUM operation that is the first one that is\ncapable of freezing should ideally not *also* be the first one that\nhas to wait for a cleanup lock. We should be willing to put off\nwaiting for a cleanup lock for much longer than we're willing to put\noff freezing. Reusing the same cutoff just makes life harder.\n\nClearly the idea of decoupling the PROC_VACUUM_FOR_WRAPAROUND behavior\nfrom antiwraparound autovacuum is conceptually related to my patch\nseries, but it can be treated as separate work. That's why I'm\nstarting another thread now.\n\nThere is another idea in that patch series that also seems worth\nmentioning as relevant (but not essential) to this discussion on this\nthread: it would be better if antiwraparound autovacuum was simply\nanother way to launch an autovacuum, which isn't fundamentally\ndifferent to any other. I believe that users will find this conceptual\nmodel a lot easier, especially in a world where antiwraparound\nautovacuums naturally became rare (which is the world that the big\npatch series seeks to bring about). 
It'll make antiwraparound\nautovacuum \"the threshold of last resort\", only needed when\nconventional tuple-based thresholds don't trigger at all for an\nextended period of time (e.g., for static tables).\n\nPerhaps it won't be trivial to fix autovacuum.c in the way I have in\nmind (which is to split PROC_VACUUM_FOR_WRAPAROUND into two flags that\nserve two separate purposes). I haven't considered if we're\naccidentally relying on the coupling to avoid confusion within\nautovacuum.c. That doesn't seem important right now, though.\n\n[1] https://www.tritondatacenter.com/blog/manta-postmortem-7-27-2015\n[2] https://postgr.es/m/CAH2-WzkU42GzrsHhL2BiC1QMhaVGmVdb5HR0_qczz0Gu2aSn=A@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Oct 2022 14:58:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Decoupling antiwraparound autovacuum from special rules around auto\n cancellation"
},
{
"msg_contents": "On Wed, 2022-10-19 at 14:58 -0700, Peter Geoghegan wrote:\n> Why should the PROC_VACUUM_FOR_WRAPAROUND behavior happen on\n> *exactly*\n> the same timeline as the one used to launch an antiwraparound\n> autovacuum, though?\n\nThe terminology is getting slightly confusing here: by\n\"antiwraparound\", you mean that it's not skipping unfrozen pages, and\ntherefore is able to advance relfrozenxid. Whereas the\nPROC_VACUUM_FOR_WRAPAROUND is the same thing, except done with greater\nurgency because wraparound is imminent. Right?\n\n> There is no inherent reason why we have to do both\n> things at exactly the same XID-age-wise time. But there is reason to\n> think that doing so could make matters worse rather than better [1].\n\nCan you explain?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 20 Oct 2022 11:09:00 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 11:09 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> The terminology is getting slightly confusing here: by\n> \"antiwraparound\", you mean that it's not skipping unfrozen pages, and\n> therefore is able to advance relfrozenxid. Whereas the\n> PROC_VACUUM_FOR_WRAPAROUND is the same thing, except done with greater\n> urgency because wraparound is imminent. Right?\n\nNot really.\n\nI started this thread to discuss a behavior in autovacuum.c and proc.c\n(the autocancellation behavior), which is, strictly speaking, not\nrelated to the current vacuumlazy.c behavior we call aggressive mode\nVACUUM. Various hackers have in the past described antiwraparound\nautovacuum as \"implying aggressive\", which makes sense; what's the\npoint in doing an antiwraparound autovacuum that can almost never\nadvance relfrozenxid?\n\nIt is nevertheless true that antiwraparound autovacuum is an\nindependent behavior to aggressive VACUUM. The former is an autovacuum\nthing, and the latter is a VACUUM thing. That's just how it works,\nmechanically.\n\nIf this division seems artificial or pedantic to you, then consider\nthe fact that you can quite easily get a non-aggressive antiwraparound\nautovacuum by using the storage option called\nautovacuum_freeze_max_age (instead of the GUC):\n\nhttps://postgr.es/m/CAH2-Wz=DJAokY_GhKJchgpa8k9t_H_OVOvfPEn97jGNr9W=deg@mail.gmail.com\n\nThis is even a case where we'll output a distinct description in the\nserver log when autovacuum logging is enabled and gets triggered. So\nwhile there may be no point in an antiwraparound autovacuum that is\nnon-aggressive, that doesn't stop them from happening. Regardless of\nwhether or not that's an intended behavior, that's just how the\nmechanism has been constructed.\n\n> > There is no inherent reason why we have to do both\n> > things at exactly the same XID-age-wise time. 
But there is reason to\n> > think that doing so could make matters worse rather than better [1].\n>\n> Can you explain?\n\nWhy should the special autocancellation behavior for antiwraparound\nautovacuums kick in at exactly the same point that we first launch an\nantiwraparound autovacuum? Maybe that aggressive intervention will be\nneeded, in the end, but why start there?\n\nWith my patch series, antiwraparound autovacuums still occur, but\nthey're confined to things like static tables -- things that are\npretty much edge cases. They still need to behave sensibly (i.e.\nreliably advance relfrozenxid based on some principled approach), but\nnow they're more like \"an autovacuum that happens because no other\ncondition triggered an autovacuum\". To some degree this is already the\ncase, but I'd like to be more deliberate about it.\n\nLeaving my patch series aside, I still don't think that it makes sense\nto make it impossible to auto-cancel antiwraparound autovacuums,\nacross the board, regardless of the underlying table age. We still\nneed something like that, but why not give a still-cancellable\nautovacuum worker a chance to resolve the problem? Why take a risk of\ncausing much bigger problems (e.g., blocking automated DDL that blocks\nsimple SELECT queries) before the point that that starts to look like\nthe lesser risk (compared to hitting xidStopLimit)?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 20 Oct 2022 11:52:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 11:52 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Leaving my patch series aside, I still don't think that it makes sense\n> to make it impossible to auto-cancel antiwraparound autovacuums,\n> across the board, regardless of the underlying table age.\n\nOne small thought on the presentation/docs side of this: maybe it\nwould be better to invent a new kind of autovacuum that has the same\npurpose as antiwraparound autovacuum, but goes by a different name,\nand doesn't have the special behavior around cancellations. We\nwouldn't have to change anything about the behavior of antiwraparound\nautovacuum once we reached the point of needing one.\n\nMaybe we wouldn't even need to invent a new user-visible name for this\nother kind of autovacuum. While even this so-called \"new kind of\nautovacuum\" will be rare once my main patch series gets in, it'll\nstill be a totally normal occurrence. Whereas antiwraparound\nautovacuums are sometimes described as an emergency mechanism.\n\nThat way we wouldn't be fighting against the widely held perception\nthat antiwraparound autovacuums are scary. In fact that reputation\nwould be fully deserved, for the first time. There are lots of\nproblems with the idea that antiwraparound autovacuum is kind of an\nemergency thing right now, but this would make things fit the\nperception, \"fixing\" the perception. Antiwraparound autovacuums would\nbecome far far rarer under this scheme, but when they did happen\nthey'd be clear cause for concern. A useful signal for users, who\nshould ideally aim to never see *any* antiwraparound autovacuums.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 21 Oct 2022 17:39:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, 2022-10-21 at 17:39 -0700, Peter Geoghegan wrote:\n> One small thought on the presentation/docs side of this: maybe it\n> would be better to invent a new kind of autovacuum\n\nIt's possible this would be easier for users to understand: one process\nthat does cleanup work over time in a way that minimizes interference;\nand another process that activates in more urgent situations (perhaps\ndue to misconfiguration of the first process).\n\nBut we should be careful that we don't end up with more confusion. For\nsomething like that to work, we'd probably want the second process to\nnot be configurable at all, and we'd want it to be issuing WARNINGs\npointing to what might be misconfigured, and otherwise just be\ninvisible.\n\n> That way we wouldn't be fighting against the widely held perception\n> that antiwraparound autovacuums are scary.\n\nThere's certainly a terminology problem there. Just to brainstorm on\nsome new names, we might want to call it something like \"xid\nreclamation\" or \"xid horizon advancement\".\n\nWhen it starts to run out, we can use words like \"wraparound\" or\n\"exhaustion\".\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Sun, 23 Oct 2022 21:32:39 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Sun, Oct 23, 2022 at 9:32 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> It's possible this would be easier for users to understand: one process\n> that does cleanup work over time in a way that minimizes interference;\n> and another process that activates in more urgent situations (perhaps\n> due to misconfiguration of the first process).\n\nI think that the new \"early\" version of antiwraparound autovacuum\n(that can still be autocancelled) would simply be called autovacuum.\nIt wouldn't appear as \"autovacuum to prevent wraparound\" in places\nlike pg_stat_activity. For the most part users wouldn't have to care\nabout the difference between these autovacuums and traditional\nnon-antiwraparound autovacuums. They really would be exactly the same\nthing, so it would make sense if users typically noticed no difference\nwhatsoever (at least in contexts like pg_stat_activity).\n\n> But we should be careful that we don't end up with more confusion. For\n> something like that to work, we'd probably want the second process to\n> not be configurable at all, and we'd want it to be issuing WARNINGs\n> pointing to what might be misconfigured, and otherwise just be\n> invisible.\n\nThere should be some simple scheme for determining when an\nantiwraparound autovacuum (non-cancellable autovacuum to advance\nrelfrozenxid/relminmxid) should run (applied by the autovacuum.c\nscheduling logic). Something like \"table has attained an age that's\nnow 2x autovacuum_freeze_max_age, or 1/2 of vacuum_failsafe_age,\nwhichever is less\".\n\nThe really important thing is giving a regular/early autocancellable\nautovacuum triggered by age(relfrozenxid) *some* opportunity to run. I\nstrongly suspect that the exact details won't matter too much,\nprovided we manage to launch at least one such autovacuum before\nescalating to traditional antiwraparound autovacuum (which cannot be\nautocancelled). 
Even if regular/early autovacuum had just one\nopportunity to run to completion, we'd already be much better off. The\nhazards from blocking automated DDL in a way that leads to a very\ndisruptive traffic jam (like in the Joyent Manta postmortem) would go\nway down.\n\n> > That way we wouldn't be fighting against the widely held perception\n> > that antiwraparound autovacuums are scary.\n>\n> There's certainly a terminology problem there. Just to brainstorm on\n> some new names, we might want to call it something like \"xid\n> reclamation\" or \"xid horizon advancement\".\n\nI think that we should simply call it autovacuum. Under this scheme,\nantiwraparound autovacuum would be a qualitatively different kind of\noperation to users (though not to vacuumlazy.c), because it would not\nbe autocancellable in the standard way. And because users should take\nit as a signal that things aren't really working well (otherwise we\nwouldn't have reached the point of requiring a scary antiwraparound\nautovacuum in the first place). Right now antiwraparound autovacuums\nare both an emergency thing (or at least described as such in one or\ntwo areas of the source code), and a completely routine occurrence.\nThis is deeply confusing.\n\nSeparately, I plan on breaking out insert-triggered autovacuums from\ntraditional dead tuple triggered autovacuums [1], which creates a need\nto invent some kind of name to differentiate the new table age\ntriggering criteria from both insert-driven and dead tuple driven\nautovacuums. These are all fundamentally the same operations with the\nsame urgency to users, though. 
We'd only need to describe the\n*criteria* that *triggered* the autovacuum in our autovacuum log\nreport (actually we'd still report autovacuums aš antiwraparound\nautovacuum in cases where that still happened, which won't be\npresented as just another triggering criteria in the report).\n\n[1] https://www.postgresql.org/message-id/flat/CAH2-WznEqmkmry8feuDK8XdpH37-4anyGF7a04bWXOc1GKd0Yg@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Oct 2022 07:25:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, 2022-10-24 at 07:25 -0700, Peter Geoghegan wrote:\n> The really important thing is giving a regular/early autocancellable\n> autovacuum triggered by age(relfrozenxid) *some* opportunity to run.\n\n+1. That principle seems both reasonable from a system standpoint and\nunderstandable to a user.\n\n> Even if regular/early autovacuum had just one\n> opportunity to run to completion, we'd already be much better off.\n\nBy \"opportunity\", you mean that, regardless of configuration, the\ncancellable autovacuum would at least start; though it still might be\ncancelled by DDL. Right?\n\n> These are all fundamentally the same operations with the\n> same urgency to users, though. We'd only need to describe the\n> *criteria* that *triggered* the autovacuum in our autovacuum log\n> report\n\nHmm... I'm worried that could be a bit confusing depending on how it's\ndone. Let's be clear that it was merely the triggering criteria and\ndoesn't necessarily represent the work that is being done.\n\nThere are enough cases that it would be good to start a document and\noutline the end behavior that your patch series is designed to\naccomplish. In other words, a before/after of the interesting cases.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 24 Oct 2022 08:42:45 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 8:43 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Even if regular/early autovacuum had just one\n> > opportunity to run to completion, we'd already be much better off.\n>\n> By \"opportunity\", you mean that, regardless of configuration, the\n> cancellable autovacuum would at least start; though it still might be\n> cancelled by DDL. Right?\n\nYes, exactly.\n\nIt might be difficult as a practical matter to make sure that we\n*reliably* give autovacuum.c the opportunity to launch a \"standard\"\nautovacuum tasked with advancing relfrozenxid (just after\nautovacuum_freeze_max_age is first crossed) before the point that a\nscary antiwraparound autovacuum is launched. So we might end up giving\nit more XID slack than it's likely to ever need (say by only launching\na traditional antiwraparound autovacuum against tables that attain an\nage that is twice the value of autovacuum_freeze_max_age). These are\nall just details, though -- the important principle is that we try our\nutmost to give the less disruptive strategy a chance to succeed before\nconcluding that it has failed, and then \"escalating\" to a traditional\nantiwraparound autovacuum.\n\n> > These are all fundamentally the same operations with the\n> > same urgency to users, though. We'd only need to describe the\n> > *criteria* that *triggered* the autovacuum in our autovacuum log\n> > report\n>\n> Hmm... I'm worried that could be a bit confusing depending on how it's\n> done. Let's be clear that it was merely the triggering criteria and\n> doesn't necessarily represent the work that is being done.\n\nMaybe it could be broken out into a separate \"autovacuum triggered by:\n\" line, seen only in the autovacuum log instrumentation (and not in\nthe similar report output by a manual VACUUM VERBOSE). 
When we still\nend up \"escalating\" to an antiwraparound autovacuum, the \"triggered\nby:\" line would match whatever we'd show in the benign the\nnon-cancellable-but-must-advance-relfrozenxid autovacuum case. The\ndifference would be that we'd now be reporting on a different\noperation entirely (not just a regular autovacuum, a scary\nantiwraparound autovacuum).\n\n(Again, even these distinctions wouldn't be meaningful to vacuumlazy.c\nitself -- it would just need to handle the details around logging in a\nway that gave users the right idea. There wouldn't be any special\ndiscrete aggressive mode of operation anymore, assuming my big patch\nset gets into Postgres 16 too.)\n\n> There are enough cases that it would be good to start a document and\n> outline the end behavior that your patch series is designed to\n> accomplish. In other words, a before/after of the interesting cases.\n\nThat's on my TODO list. Mostly it's an independent thing to this\n(antiwraparound) autovacuum stuff, despite the fact that both projects\nshare the same underlying philosophy.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 24 Oct 2022 09:00:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 9:00 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Maybe it could be broken out into a separate \"autovacuum triggered by:\n> \" line, seen only in the autovacuum log instrumentation (and not in\n> the similar report output by a manual VACUUM VERBOSE). When we still\n> end up \"escalating\" to an antiwraparound autovacuum, the \"triggered\n> by:\" line would match whatever we'd show in the benign the\n> non-cancellable-but-must-advance-relfrozenxid autovacuum case. The\n> difference would be that we'd now be reporting on a different\n> operation entirely (not just a regular autovacuum, a scary\n> antiwraparound autovacuum).\n\nAttached WIP patch invents the idea of a regular autovacuum that is\ntasked with advancing relfrozenxid -- which is really just another\ntrigger criteria, reported on in the server log in its autovacuum\nreports. Of course we retain the idea of antiwraparound autovacuums.\nThe only difference is that they are triggered when table age has\nadvanced by twice the usual amount, which is presumably only possible\nbecause a regular autovacuum couldn't start or couldn't complete in\ntime (most likely due to continually being auto-cancelled).\n\nAs I said before, I think that the most important thing is to give\nregular autovacuuming a chance to succeed. The exact approach taken\nhas a relatively large amount of slack, but that probably isn't\nneeded. So the new antiwraparound cutoffs were chosen because they're\neasy to understand and remember, which is fairly arbitrary.\n\nAdding this to the upcoming CF.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 25 Nov 2022 14:47:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, 2022-11-25 at 14:47 -0800, Peter Geoghegan wrote:\n> Attached WIP patch invents the idea of a regular autovacuum that is\n> tasked with advancing relfrozenxid -- which is really just another\n> trigger criteria, reported on in the server log in its autovacuum\n> reports. Of course we retain the idea of antiwraparound autovacuums.\n> The only difference is that they are triggered when table age has\n> advanced by twice the usual amount, which is presumably only possible\n> because a regular autovacuum couldn't start or couldn't complete in\n> time (most likely due to continually being auto-cancelled).\n> \n> As I said before, I think that the most important thing is to give\n> regular autovacuuming a chance to succeed. The exact approach taken\n> has a relatively large amount of slack, but that probably isn't\n> needed. So the new antiwraparound cutoffs were chosen because they're\n> easy to understand and remember, which is fairly arbitrary.\n\nThe target is a table that receives no DML at all, right?\nI think that is a good idea.\nWouldn't it make sense to trigger that at *half* \"autovacuum_freeze_max_age\"?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 26 Nov 2022 18:58:20 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 9:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> The target is a table that receives no DML at all, right?\n\nSort of, but not really. The target is a table that doesn't get\nvacuumed for any other reason -- I don't make any claims beyond that\none.\n\nIt seems a little too optimistic to suppose that such a table really\ndidn't need any vacuuming to deal with bloat just because autovacuum.c\ndidn't seem to think that it did.\n\n> I think that is a good idea.\n> Wouldn't it make sense to trigger that at *half* \"autovacuum_freeze_max_age\"?\n\nThat would be equivalent to what I've done here, provided you also\ndouble the autovacuum_freeze_max_age setting. I did it this way\nbecause I believe that it has fewer problems. The approach I took\nmakes the general perception that antiwraparound autovacuum are a\nscary thing (really just needed for emergencies) become true, for the\nfirst time.\n\nWe should expect to see very few antiwraparound autovacuums with the\npatch, but when we do see even one it'll be after a less aggressive\napproach was given the opportunity to succeed, but (for whatever\nreason) failed. Just seeing any antiwraparound autovacuums will become\na useful signal of something being amiss in a way that it just isn't\nat the moment. The docs can be updated to talk about antiwraparound\nautovacuum as a thing that you should encounter approximately never.\nThis is possible even though the patch isn't invasive at all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 26 Nov 2022 11:00:22 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Sat, 2022-11-26 at 11:00 -0800, Peter Geoghegan wrote:\n\n> > I think that is a good idea.\n> > Wouldn't it make sense to trigger that at *half* \"autovacuum_freeze_max_age\"?\n> \n> That would be equivalent to what I've done here, provided you also\n> double the autovacuum_freeze_max_age setting.\n\nThat's exactly what I was trying to debate. Wouldn't it make sense to\ntrigger VACUUM earlier so that it has a chance of being less heavy?\nOn the other hand, if there are not sufficiently many modifications\non the table to trigger autovacuum, perhaps it doesn't matter in many\ncases.\n\n> I did it this way\n> because I believe that it has fewer problems. The approach I took\n> makes the general perception that antiwraparound autovacuum are a\n> scary thing (really just needed for emergencies) become true, for the\n> first time.\n> \n> We should expect to see very few antiwraparound autovacuums with the\n> patch, but when we do see even one it'll be after a less aggressive\n> approach was given the opportunity to succeed, but (for whatever\n> reason) failed.\n\nIs that really so much less aggressive? Will that autovacuum run want\nto process all pages that are not all-frozen? If not, it probably won't\ndo much good. If yes, it will be just as heavy as an anti-wraparound\nautovacuum (except that it won't block other sessions).\n\n> Just seeing any antiwraparound autovacuums will become\n> a useful signal of something being amiss in a way that it just isn't\n> at the moment. The docs can be updated to talk about antiwraparound\n> autovacuum as a thing that you should encounter approximately never.\n> This is possible even though the patch isn't invasive at all.\n\nTrue. On the other hand, it might happen that after this, people start\nworrying about normal autovacuum runs because they occasionally experience\na table age autovacuum that is much heavier than the other ones. 
And\nthey can no longer tell the reason, because it doesn't show up anywhere.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sun, 27 Nov 2022 17:54:26 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 8:54 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> That's exactly what I was trying to debate. Wouldn't it make sense to\n> trigger VACUUM earlier so that it has a chance of being less heavy?\n> On the other hand, if there are not sufficiently many modifications\n> on the table to trigger autovacuum, perhaps it doesn't matter in many\n> cases.\n\nMaybe. There is a deeper problem here, though: table age is a really\nterrible proxy for whether or not it's appropriate for VACUUM to\nfreeze preexisting all-visible pages. It's not obvious that half\nautovacuum_freeze_max_age is much better than\nautovacuum_freeze_max_age if your concern is avoiding getting too far\ninto debt on freezing. Afterall, this is debt that must be paid back\nby freezing some number of physical heap pages, which in general has\napproximately zero relationship with table age (we need physical units\nfor this, not logical units).\n\nThis is a long standing problem that I hope and expect will be fixed\nin 16, by my ongoing work to completely remove the concept of\naggressive mode VACUUM:\n\nhttps://commitfest.postgresql.org/40/3843/\n\nThis makes VACUUM care about both table age and the number of unfrozen\nheap pages (mostly the latter). It weighs everything at the start of\neach VACUUM, and decides on how it must advance relfrozenxid based on\nthe conditions in the table and the picture over time. Note that\nperformance stability is the main goal; we will not just keep\naccumulating unfrozen pages for no good reason. 
All of the behaviors\npreviously associated with aggressive mode are retained, but are\nindividually applied on a timeline that is attuned to the needs of the\ntable (we can still wait for a cleanup lock, but that happens much\nlater than the point that the same page first becomes eligible for\nfreezing, not at exactly the same time).\n\nIn short, \"aggressiveness\" becomes a continuous thing, rather than a\ndiscrete mode of operation, improving performance stability. We go\nback to having only one kind of lazy vacuum, which is how things\nworked prior to the introduction of the visibility map. (We did have\nantiwraparound autovacuums in 8.3, but we did not have\naggressive/scan_all VACUUMs at the time.)\n\n> Is that really so much less aggressive? Will that autovacuum run want\n> to process all pages that are not all-frozen? If not, it probably won't\n> do much good. If yes, it will be just as heavy as an anti-wraparound\n> autovacuum (except that it won't block other sessions).\n\nEven if we assume that my much bigger patch set won't make it into 16,\nit'll probably still be a good idea to do this in 16. I admit that I\nhaven't really given that question enough thought to be sure of that,\nthough. Naturally my goal is to get everything in. Hopefully I'll\nnever have to make that call.\n\nIt is definitely true that this patch is \"the autovacuum side\" of the\nwork from the other much larger patchset (which handles \"the VACUUM\nside\" of things). This antiwraparound patch should probably be\nconsidered in that context, even though it's theoretically independent\nwork. It just worked out that way.\n\n> True. On the other hand, it might happen that after this, people start\n> worrying about normal autovacuum runs because they occasionally experience\n> a table age autovacuum that is much heavier than the other ones. 
And\n> they can no longer tell the reason, because it doesn't show up anywhere.\n\nBut you can tell the reason, just by looking at the autovacuum log\nreports. The only thing you can't do is see \"(to prevent wraparound)\"\nin pg_stat_activity. That (and the autocancellation behavioral change)\nare the only differences.\n\nThe big picture is that users really will have no good reason to care\nvery much about autovacuums that were triggered to advance\nrelfrozenxid (at least in the common case where we haven't needed to\nmake them antiwraparound autovacuums). They could almost (though not\nquite) now be explained as \"an autovacuum that takes place because\nit's been a while since we did an autovacuum to deal with bloat and/or\ntuple inserts\". That will at least be reasonable if you assume all of\nthe patches get in.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 27 Nov 2022 10:34:28 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 2:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached WIP patch invents the idea of a regular autovacuum that is\n> tasked with advancing relfrozenxid -- which is really just another\n> trigger criteria, reported on in the server log in its autovacuum\n> reports.\n\nAttached is v2, which is just to fix bitrot. Well, mostly. I did make\none functional change in v2: the autovacuum server log reports now\nseparately report on table XID age and table MultiXactId age, each as\nits own distinct triggering condition.\n\nI've heard informal reports that the difference between antiwraparound\nautovacuums triggered by table XID age versus table MXID age can\nmatter a great deal. It isn't difficult to break out that detail\nanyway, so even if the distinction isn't interesting all that often we\nmight as well surface it to users.\n\nI still haven't made a start on the docs for this. I'm still not sure\nhow much work I should do on the docs in the scope of this project\nversus my project that deals with related issues in VACUUM itself. The\nexisting material from the \"Routine Vacuuming\" docs has lots of\nproblems, and figuring out how to approach fixing those problems seems\nkind of daunting right now.\n\n--\nPeter Geoghegan",
"msg_date": "Thu, 29 Dec 2022 19:01:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 7:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v2, which is just to fix bitrot.\n\nAttached is v3. We no longer apply vacuum_failsafe_age when\ndetermining the cutoff for antiwraparound autovacuuming -- the new\napproach is a bit simpler.\n\nThis is a fairly small change overall. Now any \"table age driven\"\nautovacuum will also be antiwraparound when its\nrelfrozenxid/relminmxid attains an age that's either double the\nrelevant setting (either autovacuum_freeze_max_age or\neffective_multixact_freeze_max_age), or 1 billion XIDs/MXIDs --\nwhichever is less.\n\nThat makes it completely impossible to disable antiwraparound\nprotections (the special antiwrap autocancellation behavior) for\ntable-age-driven autovacuums once table age exceeds 1 billion\nXIDs/MXIDs. It's still possible to increase autovacuum_freeze_max_age\nto well over a billion, of course. It just won't be possible to do\nthat while also avoiding the no-auto-cancellation behavior for those\nautovacuums that are triggered due to table age crossing the\nautovacuum_freeze_max_age/effective_multixact_freeze_max_age\nthreshold.\n\n--\nPeter Geoghegan",
"msg_date": "Sun, 8 Jan 2023 17:49:20 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-08 17:49:20 -0800, Peter Geoghegan wrote:\n> Teach autovacuum.c to launch \"table age\" autovacuums at the same point\n> that it previously triggered antiwraparound autovacuums. Antiwraparound\n> autovacuums are retained, but are only used as a true option of last\n> resort, when regular autovacuum has presumably tried and failed to\n> advance relfrozenxid (likely because the auto-cancel behavior kept\n> cancelling regular autovacuums triggered based on table age).\n\nI've also seen the inverse, with recent versions of postgres: Autovacuum can\nonly ever make progress if it's an anti-wraparound vacuum, because it'll\nalways get cancelled otherwise. I'm worried that substantially increasing the\ntime until an anti-wraparound autovacuum happens will lead to more users\nrunning into out-of-xid shutdowns.\n\nI don't think it's safe to just increase the time at which anti-wrap vacuums\nhappen to a hardcoded 1 billion.\n\nI'm also doubtful that it's ok to just make all autovacuums on relations with\nan age > 1 billion anti-wraparound ones. For people that use a large\nautovacuum_freeze_max_age that will be a rude awakening.\n\n\nI am all in favor for adding logic to trigger autovacuum based on the table\nage, without needing to reach autovacuum_freeze_max_age. It never made sense\nto me that we get to the \"emergency mode\" in entirely normal operation. But\nI'm not in favor of just entirely reinterpreting existing GUCs and adding\nimportant thresholds as hardcoded numbers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 17:22:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 5:22 PM Andres Freund <andres@anarazel.de> wrote:\n> I've also seen the inverse, with recent versions of postgres: Autovacuum can\n> only ever make progress if it's an anti-wraparound vacuum, because it'll\n> always get cancelled otherwise. I'm worried that substantially increasing the\n> time until an anti-wraparound autovacuum happens will lead to more users\n> running into out-of-xid shutdowns.\n>\n> I don't think it's safe to just increase the time at which anti-wrap vacuums\n> happen to a hardcoded 1 billion.\n\nThat's not what the patch does. It doubles the time that the anti-wrap\nno-autocancellation behaviors kick in, up to a maximum of 1 billion\nXIDs/MXIDs. So it goes from autovacuum_freeze_max_age to\nautovacuum_freeze_max_age x 2, without changing the basic fact that we\ninitially launch autovacuums that advance relfrozenxid/relminmxid when\nthe autovacuum_freeze_max_age threshold is first crossed.\n\nThese heuristics are totally negotiable -- and likely should be\nthought out in more detail. It's likely that most of the benefit of\nthe patch comes from simply trying to advance relfrozenxid without the\nspecial auto-cancellation behavior one single time. The main problem\nright now is that the threshold that launches most antiwraparound\nautovacuums is exactly the same as the threshold that activates the\nauto-cancellation protections. Even doing the latter very slightly\nlater than the former could easily make things much better, while\nadding essentially no risk of the kind you're concerned about.\n\n> I'm also doubtful that it's ok to just make all autovacuums on relations with\n> an age > 1 billion anti-wraparound ones. For people that use a large\n> autovacuum_freeze_max_age that will be a rude awakening.\n\nActually, users that have autovacuum_freeze_max_age set to over 1\nbillion will get exactly the same behavior as before (except that the\ninstrumentation of autovacuum will be better). 
It'll be identical.\n\nIf you set autovacuum_freeze_max_age to 2 billion, and a \"standard\"\nautovacuum is launched on a table whose relfrozenxid age is 1.5\nbillion, it'll just be a regular dead tuples/inserted tuples\nautovacuum, with the same old familiar locking characteristics as\ntoday.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Jan 2023 17:40:06 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 8:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That's not what the patch does. It doubles the time that the anti-wrap\n> no-autocancellation behaviors kick in, up to a maximum of 1 billion\n> XIDs/MXIDs. So it goes from autovacuum_freeze_max_age to\n> autovacuum_freeze_max_age x 2, without changing the basic fact that we\n> initially launch autovacuums that advance relfrozenxid/relminmxid when\n> the autovacuum_freeze_max_age threshold is first crossed.\n\nI'm skeptical about this kind of approach.\n\nI do agree that it's good to slowly increase the aggressiveness of\nVACUUM as we get further behind, rather than having big behavior\nchanges all at once, but I think that should happen by smoothly\nvarying various parameters rather than by making discrete behavior\nchanges at a whole bunch of different times. For instance, when VACUUM\ngoes into emergency mode, it stops respecting the vacuum delay. I\nthink that's great, but it happens all at once, and maybe it would be\nbetter if it didn't. We could consider gradually ramping the vacuum\ndelay from 100% down to 0% instead of having it happen all at once.\nMaybe that's not the right idea, I don't know, and a naive\nimplementation might be worse than nothing, but I think it has some\nchance of being worth consideration.\n\nBut what the kind of change you're proposing here does is create\nanother threshold where the behavior changes suddenly, and I think\nthat's challenging from the point of view of understanding the\nbehavior of the system. The behavior already changes when you hit\nvacuum_freeze_min_age and then again when you hit\nvacuum_freeze_table_age and then there's also\nautoovacuum_freeze_max_age and xidWarnLimit and xidStopLimit and a few\nothers, and these setting all interact in pretty complex ways. The\nmore conditional logic we add to that, the harder it becomes to\nunderstand what's actually happening. 
You see a system where\nage(relfrozenxid) = 673m and you need a calculator and a spreadsheet\nto figure out what the vacuum behavior is at that point. Honestly, I\nthink we already have a problem with the behaviors here being too\ncomplex for normal human beings to understand them, and I think that\nthe kinds of changes you are proposing here could make that quite a\nbit worse.\n\nNow, you might reply to the above by saying, well, some behaviors\ncan't vary continuously. vacuum_cost_limit can perhaps be phased out\ngradually, but autocancellation seems like something that you must\neither do, or not do. I would agree with that. But what I'm saying is\nthat we ought to favor having those kinds of behaviors all engage at\nthe same point rather than at different times. I'm not saying that\nthere can't ever be good reasons to separate out different behaviors\nand have the engage at different times, but I think we will end up\nbetter off if we minimize that sort of thing as much as we reasonably\ncan. In your opening email you write \"Why should the\nPROC_VACUUM_FOR_WRAPAROUND behavior happen on *exactly* the same\ntimeline as the one used to launch an antiwraparound autovacuum,\nthough?\" and my answer is \"because that's easier to understand and I\ndon't see that it has much of a downside.\"\n\nI did take a look at the post-mortem to which you linked, but I am not\nquite sure how that bears on the behavior change under discussion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 12:12:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I do agree that it's good to slowly increase the aggressiveness of\n> VACUUM as we get further behind, rather than having big behavior\n> changes all at once, but I think that should happen by smoothly\n> varying various parameters rather than by making discrete behavior\n> changes at a whole bunch of different times.\n\nIn general I tend to agree, but, as you go on to acknowledge yourself,\nthis particular behavior is inherently discrete. Either the\nPROC_VACUUM_FOR_WRAPAROUND behavior is in effect, or it isn't.\n\nIn many important cases the only kind of autovacuum that ever runs\nagainst a certain big table is antiwraparound autovacuum. And\ntherefore every autovacuum that runs against the table must\nnecessarily not be auto cancellable. These are the cases where we see\ndisastrous interactions with automated DDL, such as a TRUNCATE run by\na cron job (to stop those annoying antiwraparound autovacuums) -- a\nheavyweight lock traffic jam that causes the application to lock up.\n\nAll that I really want to do here is give an autovacuum that *can* be\nauto cancelled *some* non-zero chance to succeed with these kinds of\ntables. TRUNCATE completes immediately, so the AEL is no big deal.\nExcept when it's blocked behind an antiwraparound autovacuum. That\nkind of interaction is occasionally just disastrous. Even just the\ntiniest bit of wiggle room could avoid it in most cases, possibly even\nalmost all cases.\n\n> Maybe that's not the right idea, I don't know, and a naive\n> implementation might be worse than nothing, but I think it has some\n> chance of being worth consideration.\n\nIt's a question of priorities. 
The failsafe isn't supposed to be used\n(when it is it is a kind of a failure), and so presumably only kicks\nin on very rare occasions, where nobody was paying attention anyway.\nSo far I've heard no complaints about this, but I've heard lots of\ncomplaints about the antiwrap autocancellation behavior.\n\n> The behavior already changes when you hit\n> vacuum_freeze_min_age and then again when you hit\n> vacuum_freeze_table_age and then there's also\n> autoovacuum_freeze_max_age and xidWarnLimit and xidStopLimit and a few\n> others, and these setting all interact in pretty complex ways. The\n> more conditional logic we add to that, the harder it becomes to\n> understand what's actually happening.\n\nIn general I strongly agree. In fact that's a big part of what\nmotivates my ongoing work on VACUUM. The user experience is central.\n\nAs Andres pointed out, presenting antiwraparound autovacuums as kind\nof an emergency thing but also somehow a routine thing is just\nhorribly confusing. I want to make them into an emergency thing in\nevery sense -- something that you as a user can reasonably expect to\nnever see (like the failsafe). But if you do see one, then that's a\nuseful signal of an underlying problem with contention, say from\nautomated DDL that pathologically cancels autovacuums again and again.\n\n> Now, you might reply to the above by saying, well, some behaviors\n> can't vary continuously. vacuum_cost_limit can perhaps be phased out\n> gradually, but autocancellation seems like something that you must\n> either do, or not do. I would agree with that. But what I'm saying is\n> that we ought to favor having those kinds of behaviors all engage at\n> the same point rather than at different times.\n\nRight now aggressive VACUUMs do just about all freezing at the same\ntime, to the extent that many users seem to think that it's a totally\ndifferent thing with totally different responsibilities to regular\nVACUUM. 
Doing everything at the same time like that causes huge\npractical problems, and is very confusing.\n\nI think that users will really appreciate having only one kind of\nVACUUM/autovacuum (since the other patch gets rid of discrete\naggressive mode VACUUMs). I want \"table age autovacuuming\" (as I\npropose to call it) come to be seen as not any different to any other\nautovacuum, such as an \"insert tuples\" autovacuum or a \"dead tuples\"\nautovacuum. The difference is only in how autovacuum.c triggers the\nVACUUM, not in any runtime behavior. That's an important goal here.\n\n> I did take a look at the post-mortem to which you linked, but I am not\n> quite sure how that bears on the behavior change under discussion.\n\nThe post-mortem involved a single \"DROP TRIGGER\" that caused chaos\nwhen it interacted with the auto cancellation behavior. It would\nusually completely instantly, so the AEL wasn't actually disruptive,\nbut one day antiwraparound autovacuum made the cron job effectively\nblock all reads and writes for hours.\n\nThe similar outages I was called in to help with personally had either\nan automated TRUNCATE or an automated CREATE INDEX. Had autovacuum\nonly been willing to yield once or twice, then it probably would have\nbeen fine -- the situation probably would have worked itself out\nnaturally. That's the best outcome you can hope for.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:22:14 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 2:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> All that I really want to do here is give an autovacuum that *can* be\n> auto cancelled *some* non-zero chance to succeed with these kinds of\n> tables. TRUNCATE completes immediately, so the AEL is no big deal.\n> Except when it's blocked behind an antiwraparound autovacuum. That\n> kind of interaction is occasionally just disastrous. Even just the\n> tiniest bit of wiggle room could avoid it in most cases, possibly even\n> almost all cases.\n\nI doubt it. Wiggle room that's based on the XID threshold being\ndifferent for one behavior vs. another can easily fail to produce any\nbenefit, because there's no guarantee that the autovacuum launcher\nwill ever try to launch a worker against that table while the XID is\nin the range where you'd get one behavior and not the other. I've\nlong thought that the fact that vacuum_freeze_table_age is documented\nas capped at 0.95 * autovacuum_freeze_max_age is silly for just this\nreason. The interval that you're proposing is much wider so the\nchances of getting a benefit are greater, but supposing that it's\ngoing to solve it in most cases seems like an exercise in unwarranted\noptimism.\n\nIn fact, I would guess that in fact it will very rarely solve the\nproblem. Normally, the XID age of a table never reaches\nautovacuum_freeze_max_age in the first place. If it does, there's some\nreason. Maybe there's a really old open transaction or an abandon\nreplication slot or an unresolved 2PC transaction. Maybe the\nautovacuum system is overloaded and no table is getting visited\nregularly because the system just can't keep up. Or maybe there are\nregular AELs being taken on the table at issue. If there's only an AEL\ntaken against a table once in blue moon, some autovacuum attempt ought\nto succeed before we reach autovacuum_freeze_max_age. 
Flipping that\naround, if we reach autovacuum_freeze_max_age without advancing\nrelfrozenxid, and an AEL shows up behind us in the lock queue, it's\nreally likely that the reason *why* we've reached\nautovacuum_freeze_max_age is that this same thing has happened to\nevery previous autovacuum attempt and they all cancelled themselves.\nIf we cancel ourselves too, we're just postponing resolution of the\nproblem to some future point when we decide to stop cancelling\nourselves. That's not a win.\n\n> I think that users will really appreciate having only one kind of\n> VACUUM/autovacuum (since the other patch gets rid of discrete\n> aggressive mode VACUUMs). I want \"table age autovacuuming\" (as I\n> propose to call it) come to be seen as not any different to any other\n> autovacuum, such as an \"insert tuples\" autovacuum or a \"dead tuples\"\n> autovacuum. The difference is only in how autovacuum.c triggers the\n> VACUUM, not in any runtime behavior. That's an important goal here.\n\nI don't agree with that goal. I think that having different kinds of\nautovacuums with different, identifiable names and corresponding,\neasily-identifiable behaviors is really important for troubleshooting.\nTrying to remove those distinctions and make everything look the same\nwill not keep autovacuum from getting itself into trouble. It will\njust make it harder to understand what's happening when it does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 16:08:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 1:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I doubt it. Wiggle room that's based on the XID threshold being\n> different for one behavior vs. another can easily fail to produce any\n> benefit, because there's no guarantee that the autovacuum launcher\n> will ever try to launch a worker against that table while the XID is\n> in the range where you'd get one behavior and not the other.\n\nOf course it's true that in general it might not succeed in\nforestalling the auto cancellation behavior. You can say something\nsimilar about approximately anything like this. For example, there is\nno absolute guarantee that any autovacuum will ever complete. But we\nstill try!\n\n> I've long thought that the fact that vacuum_freeze_table_age is documented\n> as capped at 0.95 * autovacuum_freeze_max_age is silly for just this\n> reason. The interval that you're proposing is much wider so the\n> chances of getting a benefit are greater, but supposing that it's\n> going to solve it in most cases seems like an exercise in unwarranted\n> optimism.\n\nI don't claim to be dealing in certainties, especially about the final\noutcome. Whether or not you accept my precise claim is perhaps not\nimportant, in the end. What is important is that we give things a\nchance to succeed, based on the information that we have available,\nwith a constant eye towards avoiding disaster scenarios.\n\nSome of the problems with VACUUM seem to be cases where VACUUM takes\non a potentially ruinous obligation, that it cannot possibly meet in\nsome rare cases that do come up sometimes -- like the cleanup lock\nbehavior. Is a check for $1000 written by me really worth less than a\ncheck written by me for a billion dollars? They're both nominally\nequivalent guarantees about an outcome, after all, though one has a\nfar greater monetary value. 
Which would you value more, subjectively?\n\nNothing is guaranteed -- even (and perhaps especially) strong guarantees.\n\n> In fact, I would guess that in fact it will very rarely solve the\n> problem. Normally, the XID age of a table never reaches\n> autovacuum_freeze_max_age in the first place. If it does, there's some\n> reason.\n\nProbably, but none of this matters at all if the table age never\nreaches autovacuum_freeze_max_age in the first place. We're only\ntalking about tables where that isn't the case, by definition.\nEverything else is out of scope here.\n\n> Maybe there's a really old open transaction or an abandon\n> replication slot or an unresolved 2PC transaction. Maybe the\n> autovacuum system is overloaded and no table is getting visited\n> regularly because the system just can't keep up. Or maybe there are\n> regular AELs being taken on the table at issue.\n\nMaybe an asteroid hits the datacenter, making all of these\nconsiderations irrelevant. But perhaps it won't!\n\n> If there's only an AEL\n> taken against a table once in blue moon, some autovacuum attempt ought\n> to succeed before we reach autovacuum_freeze_max_age. Flipping that\n> around, if we reach autovacuum_freeze_max_age without advancing\n> relfrozenxid, and an AEL shows up behind us in the lock queue, it's\n> really likely that the reason *why* we've reached\n> autovacuum_freeze_max_age is that this same thing has happened to\n> every previous autovacuum attempt and they all cancelled themselves.\n\nWhy do you assume that a previous autovacuum ever got launched in the\nfirst place? There is always going to be a certain kind of table that\ncan only get an autovacuum when its table age crosses\nautovacuum_freeze_max_age. And it's not just static tables -- there is\nvery good reason to have doubts about the statistics that drive\nautovacuum. 
Plus vacuum_freeze_table_age works very unreliably (which\nis why my big VACUUM patch more or less relegates it to a\ncompatibility option, while retaining a more sophisticated notion of\ntable age creating pressure to advance relfrozenxid).\n\nUnder the scheme from this autovacuum patch, it really does become\nreasonable to make a working assumption that there was a previous\nautovacuum, that failed (likely due to the autocancellation behavior,\nas you said). We must have tried and failed in an earlier autovacuum,\nonce we reach the point of needing an antiwraparound autovacuum\n(meaning a table age autovacuum which cannot be autocancelled) --\nwhich is not the case today at all. If nothing else, table age\nautovacuums will have been scheduled much earlier on -- they will have\nat least started up, barring pathological cases.\n\nThat's a huge difference in the strength of the signal, compared to today.\n\nThe super aggressive autocancellation behavior is actually\nproportionate to the problem at hand. Kind of like how if you go to\nthe doctor and tell them you have a headache, they don't schedule you\nfor emergency brain surgery. What they do is tell you to take an\naspirin, and make sure that you stay well hydrated -- if the problem\ndoesn't go away after a few days, then call back, reassess. Perhaps it\nreally will be a brain tumor, but there is nothing to gain and\neverything to lose by taking such drastic action at the first sign of\ntrouble.\n\n> If we cancel ourselves too, we're just postponing resolution of the\n> problem to some future point when we decide to stop cancelling\n> ourselves. That's not a win.\n\nIt's also only a very minor loss, relative to what would have happened\nwithout any of this. This is something that we can be relatively sure\nof (unlike anything about final outcomes). It's clear that we have a\nlot to gain. 
What do we have to lose, really?\n\n> > I think that users will really appreciate having only one kind of\n> > VACUUM/autovacuum (since the other patch gets rid of discrete\n> > aggressive mode VACUUMs). I want \"table age autovacuuming\" (as I\n> > propose to call it) come to be seen as not any different to any other\n> > autovacuum, such as an \"insert tuples\" autovacuum or a \"dead tuples\"\n> > autovacuum. The difference is only in how autovacuum.c triggers the\n> > VACUUM, not in any runtime behavior. That's an important goal here.\n>\n> I don't agree with that goal. I think that having different kinds of\n> autovacuums with different, identifiable names and corresponding,\n> easily-identifiable behaviors is really important for troubleshooting.\n\nYou need to distinguish between different types of autovacuums and\ndifferent types of VACUUMs here. Sure, it's valuable to have\ninformation about why autovacuum launched a VACUUM, and the patch\ngreatly improves that. But runtime behavior is another story.\n\nIt's not really generic behavior -- more like generic policies that\nproduce different behavior under different runtime conditions. VACUUM\nhas always had generic policies about how to do things, at least up\nuntil the introduction of the visibility map, which added\nscan_all/aggressive VACUUMs, and the vacuum_freeze_table_age GUC. The\npolicy should be the same in every VACUUM, which the behavior itself\nemerges from.\n\n> Trying to remove those distinctions and make everything look the same\n> will not keep autovacuum from getting itself into trouble. It will\n> just make it harder to understand what's happening when it does.\n\nThe point isn't to have every VACUUM behave in the same way. The point\nis to make decisions dynamically, based on the observed conditions in\nthe table. And to delay committing to things until there really is no\nalternative, to maximize our opportunities to avoid disaster. 
In\nshort: loose, springy behavior.\n\nImposing absolute obligations on VACUUM has the potential to create\nlots of problems. It is sometimes necessary, but can easily be\noverused, making a bad situation much worse.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 12 Jan 2023 14:12:31 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-12 16:08:06 -0500, Robert Haas wrote:\n> Normally, the XID age of a table never reaches autovacuum_freeze_max_age in\n> the first place.\n\nThat's not at all my experience. I often see it being the primary reason for\nautovacuum vacuuming large tables on busy OLTP systems. Even without any\nlongrunning transactions or such, with available autovac workers and without\nearlier autovacuums getting interrupted by locks. Once a table is large,\nreasonable scale factors require a lot of changes to accumulate to trigger an\nautovacuum, and during a vacuum a lot of transactions complete, leading to\nlarge tables having a significant age by the time autovac finishes.\n\nThe most common \"bad\" reason for reaching autovacuum_freeze_max_age that I see\nis cost limits not allowing vacuum to complete on time.\n\n\nPerhaps we should track how often autovacuum was triggered by what reason in\na relation's pgstats? That'd make it a lot easier to collect data, both for\ntuning the thresholds on a system, and for hacking on postgres.\n\nTracking the number of times autovacuum was interrupted due to a lock request\nmight be a good idea as well?\n\n\nI think it'd be a good idea to split off the part of the patch that introduces\nAutoVacType / adds logging for what triggered it. That's independently useful,\nlikely uncontroversial and makes the remaining patch smaller.\n\nI'd also add the trigger to the pg_stat_activity entry for the autovac\nworker. Additionally I think we should add information about using failsafe\nmode to the p_s_a entry.\n\n\nI've wished for a non-wraparound, xid age based, \"autovacuum trigger\" many\ntimes, FWIW. And I've seen plenty of places write their own userspace version\nof it, because without it they run into trouble. 
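Whatever form such a trigger takes -- a userspace cron job or an in-core check -- the heart of it is a wraparound-aware XID age comparison. A minimal standalone sketch of that comparison (illustrative only: the helper name is invented, XIDs are modeled as plain uint32 values, and the special reserved XIDs that the server skips are ignored here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * XIDs live in a modular 32-bit space, so a table's "age" must be
 * computed with wrapping subtraction rather than an ordinary ordered
 * comparison.  This is a simplified sketch, not PostgreSQL code.
 */
typedef uint32_t TransactionId;

static bool
table_age_vacuum_needed(TransactionId relfrozenxid,
                        TransactionId recent_xid,
                        uint32_t freeze_max_age)
{
	/* wrapping subtraction yields the table's age in XIDs */
	uint32_t	age = recent_xid - relfrozenxid;

	return age > freeze_max_age;
}
```

The wrapping subtraction is what makes the check keep working after the XID counter wraps past 2^32: a relfrozenxid numerically *larger* than recent_xid still yields the correct (small or large) age.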
However, I don't like that\nthe patch infers the various thresholds using magic constants / multipliers.\n\n\nautovacuum_freeze_max_age is really a fairly random collection of things:\n1) triggers autovacuum on tables based on age, in addition to the dead tuple /\n inserted tuples triggers\n2) prevents auto-cancellation of autovacuum\n3) starts autovacuum, even if autovacuum is disabled\n\nIME hitting 1) isn't a reason for concern, it's perfectly normal. Needing 2)\nto make progress is a bit more concerning. 3) should rarely be needed, but is\na good safety mechanism.\n\nI doubt that controlling all of them via one GUC is sensible.\n\n\nIf I understand the patch correctly, we now have the following age based\nthresholds for av:\n\n- force-enable autovacuum:\n oldest_datfrozenxid + autovacuum_freeze_max_age < nextXid\n- autovacuum based on age:\n freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n tableagevac = relfrozenxid < recentXid - freeze_max_age\n- prevent auto-cancellation:\n freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n prevent_auto_cancel_age = Min(freeze_max_age * 2, 1 billion)\n prevent_auto_cancel = relfrozenxid < recentXid - prevent_auto_cancel_age\n\nIs that right?\n\n\nOne thing I just noticed: Isn't it completely bonkers that we compute\nrecentXid/recentMulti once at the start of a worker in\nrelation_needs_vacanalyze()? That's fine for the calls in do_autovacuum()'s\ninitial loops over all tables. But seems completely wrong for the later calls\nvia table_recheck_autovac() -> recheck_relation_needs_vacanalyze() ->\nrelation_needs_vacanalyze()?\n\nThese variables really shouldn't be globals. It makes sense to cache them\nlocally in do_autovacuum(), but reusing them in\nrecheck_relation_needs_vacanalyze() and sharing them between launcher and worker\nis bad.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Jan 2023 13:59:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 2:00 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-12 16:08:06 -0500, Robert Haas wrote:\n> > Normally, the XID age of a table never reaches autovacuum_freeze_max_age in\n> > the first place.\n>\n> That's not at all my experience. I often see it being the primary reason for\n> autovacuum vacuuming large tables on busy OLTP systems. Even without any\n> longrunning transactions or such, with available autovac workers and without\n> earlier autovacuums getting interrupted by locks. Once a table is large,\n> reasonable scale factors require a lot of changes to accumulate to trigger an\n> autovacuum, and during a vacuum a lot of transactions complete, leading to\n> large tables having a significant age by the time autovac finishes.\n\nI've definitely seen this. I've also noticed that TPC-C's stock and\ncustomer tables (both of which are subject to many HOT updates) only\never receive antiwraparound autovacuums, even with my very aggressive\nautovacuum settings.\n\nOverall, I think that it's quite common among the largest tables, even\nwhen things are running normally.\n\n> The most common \"bad\" reason for reaching autovacuum_freeze_max_age that I see\n> is cost limits not allowing vacuum to complete on time.\n\nThere are many problems with the statistics driving this whole\nprocess, that I won't rehash right now. I actually think that the\nwhole idea of relying on statistical sampling for dead tuples is\nfundamentally just bogus (that's not how statistics work in general),\nthough I have quite a few less fundamental and more concrete\ncomplaints about the statistics just being wrong on their own terms.\n\n> Perhaps we should track how often autovacuum was triggered by what reason in\n> a relation's pgstats? 
That'd make it a lot easier to collect data, both for\n> tuning the thresholds on a system, and for hacking on postgres.\n\nThat would definitely be useful.\n\n> Tracking the number of times autovacuum was interrupted due to a lock request\n> might be a good idea as well?\n\nAlso useful information worth having.\n\n> I think it'd be a good idea to split off the part of the patch that introduces\n> AutoVacType / adds logging for what triggered it. That's independently useful,\n> likely uncontroversial and makes the remaining patch smaller.\n\nI like that idea.\n\nAttached revision v4 breaks things up mechanically, along those lines\n(no real changes to the code itself, though). The controversial parts\nof the patch are indeed a fairly small proportion of the total\nchanges.\n\n> I'd also add the trigger to the pg_stat_activity entry for the autovac\n> worker. Additionally I think we should add information about using failsafe\n> mode to the p_s_a entry.\n\nI agree that that's all useful, but it seems like it can be treated as\nlater work.\n\n> I've wished for a non-wraparound, xid age based, \"autovacuum trigger\" many\n> times, FWIW. And I've seen plenty of places write their own userspace version\n> of it, because without it they run into trouble. However, I don't like that\n> the patch infers the various thresholds using magic constants / multipliers.\n\nAs I said, these details are totally negotiable, and likely could be a\nlot better without much effort.\n\nWhat are your concerns about the thresholds? For example, is it that\nyou can't configure the behavior directly at all? Something else?\n\n> autovacuum_freeze_max_age is really a fairly random collection of things:\n> 1) triggers autovacuum on tables based on age, in addition to the dead tuple /\n> inserted tuples triggers\n> 2) prevents auto-cancellation of autovacuum\n> 3) starts autovacuum, even if autovacuum is disabled\n>\n> IME hitting 1) isn't a reason for concern, it's perfectly normal. 
Needing 2)\n> to make progress is a bit more concerning. 3) should rarely be needed, but is\n> a good safety mechanism.\n>\n> I doubt that controlling all of them via one GUC is sensible.\n\nI agree, of course, but just to be clear: I don't think it matters\nthat we couple together 1 and 3. In fact it's good that we do that,\nbecause the point to the user is that they cannot disable table-age\n(i.e. what we currently call antiwraparound) autovacuums -- that just\nmakes sense.\n\nThe only problem that I see is that item 2 is tied to the other items\nfrom your list.\n\n> If I understand the patch correctly, we now have the following age based\n> thresholds for av:\n>\n> - force-enable autovacuum:\n> oldest_datfrozenxid + autovacuum_freeze_max_age < nextXid\n> - autovacuum based on age:\n> freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> tableagevac = relfrozenxid < recentXid - freeze_max_age\n> - prevent auto-cancellation:\n> freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> prevent_auto_cancel_age = Min(freeze_max_age * 2, 1 billion)\n> prevent_auto_cancel = relfrozenxid < recentXid - prevent_auto_cancel_age\n>\n> Is that right?\n\nThat summary looks accurate, but I'm a bit confused about why you're\nasking the question this way. I thought that it was obvious that the\npatch doesn't change most of these things.\n\nThe only mechanism that the patch changes is related to \"prevent\nauto-cancellation\" behaviors -- which is now what the term\n\"antiwraparound\" refers to. It does change the name of \"autovacuum\nbased on age\", though -- the name is now \"table age autovacuum\" (the\nold name was antiwraparound autovacuum, of course). As I pointed out\nto you already, it's mechanically impossible for any autovacuum to be\nantiwraparound unless it's an XID table age/MXID table age autovacuum.\n\nThe naming convention I propose here makes it a little confusing for\nus to discuss, but it seems like the best thing for users. 
Users'\nbasic intuitions about antiwraparound autovacuums (that they're scary\nthings needed because wraparound is starting to become a real concern)\ndon't need to change. If anything they become more accurate, because\nantiwraparound autovacuums become non-routine -- which is really how\nit should have been when autovacuum was first added IMV. Users have\nrather good reasons to find antiwraparound autovacuums scary, even\nthough that's kind of wrong (it's really our fault for making it so\nconfusing for them, not their fault for being wrong).\n\n> One thing I just noticed: Isn't it completely bonkers that we compute\n> recentXid/recentMulti once at the start of a worker in\n> relation_needs_vacanalyze()? That's fine for the calls in do_autovacuum()'s\n> initial loops over all tables. But seems completely wrong for the later calls\n> via table_recheck_autovac() -> recheck_relation_needs_vacanalyze() ->\n> relation_needs_vacanalyze()?\n>\n> These variables really shouldn't be globals. It makes sense to cache them\n> locally in do_autovacuum(), but reusing them\n> recheck_relation_needs_vacanalyze() and sharing it between launcher and worker\n> is bad.\n\nI am not sure. I do hope that there isn't some subtle way in which the\ndesign relies on that. It seems obviously weird, and so I have to\nwonder if there is a reason behind it that isn't immediately apparent.\n\n--\nPeter Geoghegan",
"msg_date": "Fri, 13 Jan 2023 16:13:45 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 16:13:45 -0800, Peter Geoghegan wrote:\n> On Fri, Jan 13, 2023 at 2:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think it'd be a good idea to split off the part of the patch that introduces\n> > AutoVacType / adds logging for what triggered it. That's independently useful,\n> > likely uncontroversial and makes the remaining patch smaller.\n>\n> I like that idea.\n\nCool.\n\n\n> Attached revision v4 breaks things up mechanically, along those lines\n> (no real changes to the code itself, though). The controversial parts\n> of the patch are indeed a fairly small proportion of the total\n> changes.\n\nI don't think the split is right. There's too much in 0001 - it's basically\nintroducing the terminology of 0002 already. Could you make it a much more\nminimal change?\n\n\n> > I'd also add the trigger to the pg_stat_activity entry for the autovac\n> > worker. Additionally I think we should add information about using failsafe\n> > mode to the p_s_a entry.\n>\n> I agree that that's all useful, but it seems like it can be treated as\n> later work.\n\nIDK, it splits up anti-wraparound vacuums into different sub-kinds but doesn't\nallow distinguishing most of them from a plain autovacuum.\n\nSeems pretty easy to display the trigger from 0001 in\nautovac_report_activity()? You'd have to move the AutoVacType -> translated\nstring mapping into a separate function. That seems like a good idea anyway,\nthe current coding makes translators translate several largely identical\nstrings that just differ in one part.\n\n\n\n> > I've wished for a non-wraparound, xid age based, \"autovacuum trigger\" many\n> > times, FWIW. And I've seen plenty of places write their own userspace version\n> > of it, because without it they run into trouble. 
However, I don't like that\n> > the patch infers the various thresholds using magic constants / multipliers.\n>\n> As I said, these details are totally negotiable, and likely could be a\n> lot better without much effort.\n>\n> What are your concerns about the thresholds? For example, is it that\n> you can't configure the behavior directly at all? Something else?\n\nThe above, but mainly that\n\n> > freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> > prevent_auto_cancel_age = Min(freeze_max_age * 2, 1 billion)\n> > prevent_auto_cancel = relfrozenxid < recentXid - prevent_auto_cancel_age\n\nseems quite confusing / non-principled. What's the logic behind the auto\ncancel threshold being 2 x freeze_max_age, except that when freeze_max_age is\n1 billion, the cutoff is set to 1 billion? That just makes no sense to me.\n\n\nMaybe I'm partially confused by the Min(freeze_max_age * 2, 1 billion). As you\npointed out in [1], that doesn't actually lower the threshold for \"table age\"\nvacuums, because we don't even get to that portion of the code if we didn't\nalready cross freeze_max_age.\n\n if (freeze_max_age < ANTIWRAPAROUND_MAX_AGE)\n freeze_max_age *= 2;\n freeze_max_age = Min(freeze_max_age, ANTIWRAPAROUND_MAX_AGE);\n\nYou're lowering a large freeze_max_age to ANTIWRAPAROUND_MAX_AGE - but it only\nhappens after all other checks of freeze_max_age, so it won't influence\nthose. That's confusing code.\n\nI think this'd be a lot more readable if you introduced a separate variable\nfor the \"no-cancel\" threshold, rather than overloading freeze_max_age with\ndifferent meanings. And you should remove the confusing \"lowering\" of the\ncutoff. 
Maybe something like\n\n no_cancel_age = freeze_max_age;\n if (no_cancel_age < ANTIWRAPAROUND_MAX_AGE)\n {\n /* multiply by two, but make sure to not exceed ANTIWRAPAROUND_MAX_AGE */\n no_cancel_age = Min((uint32)ANTIWRAPAROUND_MAX_AGE, (uint32)no_cancel_age * 2);\n }\n\nThe uint32 bit isn't needed with ANTIWRAPAROUND_MAX_AGE at 1 billion, but at\n1.2 it would be needed, so it seems better to have it.\n\n\nThat still doesn't explain why the cancel_age = freeze_max_age * 2\nbehaviour should be clamped at 1 billion, though.\n\n\n\n> > If I understand the patch correctly, we now have the following age based\n> > thresholds for av:\n> >\n> > - force-enable autovacuum:\n> > oldest_datfrozenxid + autovacuum_freeze_max_age < nextXid\n> > - autovacuum based on age:\n> > freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> > tableagevac = relfrozenxid < recentXid - freeze_max_age\n> > - prevent auto-cancellation:\n> > freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> > prevent_auto_cancel_age = Min(freeze_max_age * 2, 1 billion)\n> > prevent_auto_cancel = relfrozenxid < recentXid - prevent_auto_cancel_age\n> >\n> > Is that right?\n>\n> That summary looks accurate, but I'm a bit confused about why you're\n> asking the question this way. I thought that it was obvious that the\n> patch doesn't change most of these things.\n\nFor me it was helpful to clearly list the triggers when thinking about the\nissue. I found the diff hard to read and, as noted above, the logic for the\nauto cancel threshold quite confusing, so ...\n\n\n> The only mechanism that the patch changes is related to \"prevent\n> auto-cancellation\" behaviors -- which is now what the term\n> \"antiwraparound\" refers to.\n\nNot sure that redefining what a long-standing name refers to is helpful. 
It\nmight be best to retire it and come up with new names.\n\n\n> It does change the name of \"autovacuum based on age\", though -- the name is\n> now \"table age autovacuum\" (the old name was antiwraparound autovacuum, of\n> course). As I pointed out to you already, it's mechanically impossible for\n> any autovacuum to be antiwraparound unless it's an XID table age/MXID table\n> age autovacuum.\n\nThinking about it a bit more, my problem with the current anti-wraparound logic boils\ndown to a few different aspects:\n\n1) It regularly scares the crap out of users, even though it's normal. This\n is further confounded by failsafe autovacuums, where a scared reaction is\n appropriate, not being visible in pg_stat_activity.\n\n I suspect that learning that \"vacuum to prevent wraparound\" isn't a\n problem contributes to people later ignoring \"must be vacuumed within ...\"\n WARNINGS, which I've seen plenty of times.\n\n2) It makes it hard to DROP, TRUNCATE, VACUUM FULL or even manually VACUUM\n tables being anti-wraparound vacuumed, even though those manual actions will\n often resolve the issue much more quickly.\n\n3) Autovacuums triggered by tuple thresholds persistently getting cancelled\n also regularly causes outages, and they make it more likely that an\n eventual age-based vacuum will take forever.\n\n\nAspect 1) is addressed to a good degree by the proposed split of anti-wrap\ninto an age and anti-cancel triggers. And could be improved by reporting\nfailsafe autovacuums in pg_stat_activity.\n\nPerhaps 2) could be improved a bit by emitting a WARNING message when we\ndidn't cancel AV because it was anti-wraparound? 
But eventually I think we\nsomehow need to signal the \"intent\" of the lock drop down into ProcSleep() or\nwherever it'd be.\n\nI have two ideas around 3):\n\nFirst, we could introduce thresholds for the tuple thresholds, after which\nautovacuum isn't cancellable anymore.\n\nSecond, we could track the number of cancellations since the last [auto]vacuum\nin pgstat, and only trigger the anti-cancel behaviour when autovacuum has been\ncancelled a number of times.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Jan 2023 18:09:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 6:09 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think the split is right. There's too much in 0001 - it's basically\n> introducing the terminology of 0002 already. Could you make it a much more\n> minimal change?\n\nOkay.\n\nI thought that you might say that. I really just wanted to show you\nhow small the code footprint was for the more controversial part.\n\n> IDK, it splits up anti-wraparound vacuums into different sub-kinds but doesn't\n> allow distinguishing most of them from a plain autovacuum.\n>\n> Seems pretty easy to display the trigger from 0001 in\n> autovac_report_activity()? You'd have to move the AutoVacType -> translated\n> string mapping into a separate function. That seems like a good idea anyway,\n> the current coding makes translators translate several largely identical\n> strings that just differ in one part.\n\nI'll look into it. It's on my TODO list now.\n\n> seems quite confusing / non-principled. What's the logic behind the auto\n> cancel threshold being 2 x freeze_max_age, except that when freeze_max_age is\n> 1 billion, the cutoff is set to 1 billion? That just makes no sense to me.\n\nWhy is the default for autovacuum_freeze_max_age 200 million? I think\nthat it's because 200 million is 10% of 2 billion. And so it is for\nthis cap. 1 billion is 50% of 2 billion. It's just numerology. It\ndoesn't make sense to me either.\n\nOf course it's not *completely* arbitrary. Obviously the final auto\ncancel threshold needs to be at least somewhat greater than\nfreeze_max_age for any of this to make any sense at all. And, we\nshould ideally have a comfortable amount of slack to work with, so\nthat things like moderate (not pathological) autovacuum worker\nstarvation aren't likely to defeat our attempts at avoiding the\nno-auto-cancel behavior for no good reason. 
Finally, once we get to a\ncertain table age it starts to seem like a bad idea to not be\nconservative about auto cancellations.\n\nI believe that we agree on most things when it comes to VACUUM, but\nI'm pretty sure that we still disagree about the value in setting\nautovacuum_freeze_max_age very high. I continue to believe that\nsetting it over a billion or so is just a bad idea. I'm mentioning\nthis only because it might give you some idea of where I'm coming from\n-- in general I believe that age-based settings often have only very\nweak relationships with what actually matters.\n\nIt might make sense to always give a small fixed amount of headroom\nwhen autovacuum_freeze_max_age is set to very high values. Maybe just\n5 million XIDs/MXIDs. That would probably be just as effective as\n(say) 500 million in almost all cases. But then you have to accept\nanother magic number.\n\n> I think this'd be a lot more readable if you introduced a separate variable\n> for the \"no-cancel\" threshold, rather than overloading freeze_max_age with\n> different meanings. And you should remove the confusing \"lowering\" of the\n> cutoff. Maybe something like\n>\n> no_cancel_age = freeze_max_age;\n> if (no_cancel_age < ANTIWRAPAROUND_MAX_AGE)\n> {\n> /* multiply by two, but make sure to not exceed ANTIWRAPAROUND_MAX_AGE */\n> no_cancel_age = Min((uint32)ANTIWRAPAROUND_MAX_AGE, (uint32)no_cancel_age * 2);\n> }\n\nI'm happy to do it that way, but let's decide what the algorithm\nitself should be first. Or let's explore it a bit more, at least.\n\n> > The only mechanism that the patch changes is related to \"prevent\n> > auto-cancellation\" behaviors -- which is now what the term\n> > \"antiwraparound\" refers to.\n>\n> Not sure that redefining what a long-standing name refers to is helpful. It\n> might be best to retire it and come up with new names.\n\nI generally try to avoid bike-shedding, and naming things is the\nultimate source of bike shedding. 
I dread having to update the docs\nfor this stuff, too. The docs in this area (particularly \"Routine\nVacuuming\") are such a mess already. But perhaps you're right.\n\n> 1) It regularly scares the crap out of users, even though it's normal. This\n> is further confounded by failsafe autovacuums, where a scared reaction is\n> appropriate, not being visible in pg_stat_activity.\n\nThe docs actually imply that when the system reaches the point of\nentering xidStopLimit mode, you might get data corruption. Of course\nthat's not true (the entire point of xidStopLimit is to avoid that),\nbut apparently we like to keep users on their toes.\n\n> I suspect that learning that \"vacuum to prevent wraparound\" isn't a\n> problem contributes to people later ignoring \"must be vacuumed within ...\"\n> WARNINGS, which I've seen plenty of times.\n\nThat point never occurred to me, but it makes perfect intuitive sense\nthat users would behave that way. This phenomenon is sometimes called\nalarm fatigue. It can be quite dangerous to warn people about\nnon-issues \"out of an abundance of caution\".\n\n> 3) Autovacuums triggered by tuple thresholds persistently getting cancelled\n> also regularly causes outages, and they make it more likely that an\n> eventual age-based vacuum will take forever.\n\nReally? Outages? I imagine that you'd have to be constantly hammering\nthe table with DDL before it could happen. That's possible, but it\nseems relatively obvious that doing that is asking for trouble.\nWhereas the Manta postmortem (and every similar case that I've\npersonally seen) involved very nasty interactions that happened due to\nthe way components interacted with a workload that wasn't like that.\n\nRunning DDL from a cron job or from the application may not be a great\nidea, but it's also quite common.\n\n> Aspect 1) is addressed to a good degree by the proposed split of anti-wrap\n> into an age and anti-cancel triggers. 
And could be improved by reporting\n> failsafe autovacuums in pg_stat_activity.\n\nWhat you call aspect 2 (the issue with disastrous HW lock traffic jams\ninvolving TRUNCATE being run from a cron job, etc) is a big goal of\nmine for this patch series. You seem unsure of how effective my\napproach (or an equally simple approach based on table age heuristics)\nwill be. Is that the case?\n\n> Perhaps 2) could be improved a bit by emitting a WARNING message when we\n> didn't cancel AV because it was anti-wraparound? But eventually I think we\n> somehow need to signal the \"intent\" of the lock drop down into ProcSleep() or\n> wherever it'd be.\n\nThat's doable, but definitely seems like separate work.\n\n> I have two ideas around 3):\n>\n> First, we could introduce thresholds for the tuple thresholds, after which\n> autovacuum isn't cancellable anymore.\n\nDo you think that's a good idea? I just don't trust those statistics,\nat all. As in, I think they're often complete garbage.\n\n> Second, we could track the number of cancellations since the last [auto]vacuum\n> in pgstat, and only trigger the anti-cancel behaviour when autovacuum has been\n> cancelled a number of times.\n\nIn theory that would be better than an approach along the lines I've\nproposed, because it is directly based on a less aggressive approach\nbeing tried a few times, and failing a few times. That part I like.\n\nHowever, I also don't like several things about this approach. First\nof all it relies on relatively complicated infrastructure, for\nsomething that can be critical. Second, it will be hard to test.\nThird, perhaps it would make sense to give the earlier/less aggressive\napproach (a table age av that is still autocancellable) quite a large\nnumber of attempts before giving up. If the table age isn't really\ngrowing too fast, why not continue to be patient, possibly for quite a\nlong time?\n\nPerhaps a hybrid strategy could be useful? 
Something like what I came\nup with already, *plus* a mechanism that gives up after (say) 1000\ncancellations, and escalates to no-auto-cancel, regardless of table\nage. It seems sensible to assume that a less aggressive approach is\njust hopeless relatively quickly (in wall clock time and logical XID\ntime) once we see sufficiently many cancellations against the same\ntable.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 13 Jan 2023 19:39:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 19:39:41 -0800, Peter Geoghegan wrote:\n> On Fri, Jan 13, 2023 at 6:09 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think the split is right. There's too much in 0001 - it's basically\n> > introducing the terminology of 0002 already. Could you make it a much more\n> > minimal change?\n>\n> Okay.\n>\n> I thought that you might say that. I really just wanted to show you\n> how small the code footprint was for the more controversial part.\n\nI don't mind you splitting this into three parts ;)\n\n\n\n> I believe that we agree on most things when it comes to VACUUM, but\n> I'm pretty sure that we still disagree about the value in setting\n> autovacuum_freeze_max_age very high. I continue to believe that\n> setting it over a billion or so is just a bad idea. I'm mentioning\n> this only because it might give you some idea of where I'm coming from\n> -- in general I believe that age-based settings often have only very\n> weak relationships with what actually matters.\n\nI think part of our difference around a high autovacuum_freeze_max_age is due\nto things you're trying to address here - if no-auto-cancel is a separate\nthreshold from autovacuum_freeze_max_age, it's less problematic to set\nautovacuum_freeze_max_age to something lower.\n\nBut yes, there's a remaining difference of opinion / experience.\n\n\n\n> It might make sense to always give a small fixed amount of headroom\n> when autovacuum_freeze_max_age is set to very high values. Maybe just\n> 5 million XIDs/MXIDs. That would probably be just as effective as\n> (say) 500 million in almost all cases. 
But then you have to accept\n> another magic number.\n\nI suspect that most systems with a high autovacuum_freeze_max_age use it\nbecause 200m leads to too frequent autovacuums - a few million won't do much\nfor those.\n\n\nHow about a float autovacuum_no_auto_cancel_age where positive values are\ntreated as absolute values, and negative values are a multiple of\nautovacuum_freeze_max_age? And where the \"computed\" age is capped at\nvacuum_failsafe_age? A \"failsafe\" autovacuum clearly shouldn't be cancelled.\n\nAnd maybe a default setting of -1.8 or so?\n\nIf a user chooses to set autovacuum_no_auto_cancel_age and vacuum_failsafe_age\nto 2.1 billion, oh well, that's really not our problem.\n\n\n\n> > 1) It regularly scares the crap out of users, even though it's normal. This\n> > is further confounded by failsafe autovacuums, where a scared reaction is\n> > appropriate, not being visible in pg_stat_activity.\n>\n> The docs actually imply that when the system reaches the point of\n> entering xidStopLimit mode, you might get data corruption. Of course\n> that's not true (the entire point of xidStopLimit is to avoid that),\n> but apparently we like to keep users on their toes.\n\nWell, historically there wasn't all that much protection. And I suspect there\nstill might be some dragons. We really need to get rid of the remaining places\nthat cache 32bit xids across transactions.\n\n\n\n> > 3) Autovacuums triggered by tuple thresholds persistently getting cancelled\n> > also regularly causes outages, and they make it more likely that an\n> > eventual age-based vacuum will take forever.\n>\n> Really? Outages? I imagine that you'd have to be constantly hammering\n> the table with DDL before it could happen. 
That's possible, but it\n> seems relatively obvious that doing that is asking for trouble.\n\nYes, due to queries slowing down due to the bloat, or due to running out of\nspace.\n\nI've seen a ~2TB table grow to ~20TB due to dead tuples, at which point the\nserver crash-restarted due to WAL ENOSPC. I think in that case there wasn't\neven DDL, they just needed to make the table readonly, for a brief moment, a\nfew times a day. The problem started once the indexes grew to be large enough\nthat the time for (re-)finding dead tuples and an index scan phase got large\nenough that they were unlucky to be killed a few times in a row. After that\nautovac never got through the index scan phase. Not doing any \"new\" work, just\ncollecting the same dead tids over and over, scanning the indexes for those\ntids, never quite finishing.\n\n\nThe lock conflicts really don't need to be that frequent. They just need to recur more\nfrequently than what the index scans portion of vacuuming takes, which can be a\nlong time with large indexes.\n\n\n\n> > Aspect 1) is addressed to a good degree by the proposed split of anti-wrap\n> > into an age and anti-cancel triggers. And could be improved by reporting\n> > failsafe autovacuums in pg_stat_activity.\n>\n> What you call aspect 2 (the issue with disastrous HW lock traffic jams\n> involving TRUNCATE being run from a cron job, etc) is a big goal of\n> mine for this patch series. You seem unsure of how effective my\n> approach (or an equally simple approach based on table age heuristics)\n> will be. Is that the case?\n\nI was primarily concerned about situations where an admin interactively was\ntrying to get rid of the table holding back the xid horizon, after some\nproblem caused a lot of work to pile up.\n\nIf you know postgres internals, it's easy. You know that autovac stops\nauto-cancelling after freeze_max_age and you know that the lock queue is\nordered. 
So you issue DROP TABLE / TRUNCATE / whatever and immediately\nafterwards cancel the autovac worker. But if you aren't, it'll take a while to\nfigure out that the DROP TABLE isn't progressing due to a lock conflict, at\nwhich point you'll cancel the statement (possibly having wreaked havoc with\nall other accesses). Then you figure out that you need to cancel\nautovacuum. After that you try the DROP TABLE again - but the next worker has\ngotten to work on the table.\n\nBrr.\n\n\nThe bad cousin of this is when you can't even drop the table due to \"not\naccepting commands\". I don't know if you've seen people try to start postgres\nin single user mode, under pressure, for the first time. It ain't pretty.\n\n\n\n\n> > I have two ideas around 3):\n> >\n> > First, we could introduce thresholds for the tuple thresholds, after which\n> > autovacuum isn't cancellable anymore.\n>\n> Do you think that's a good idea? I just don't trust those statistics,\n> at all. As in, I think they're often complete garbage.\n\nI have seen them be reasonably accurate in plenty of busy systems. The most\ncommon problem I've seen is that the amount of assumed dead / inserted tuples\nis way too high, because the server crash-restarted at some point. We now\nhave most of the infrastructure for using a slightly older version of stats\nafter a crash, which'd make them less inaccurate.\n\nI haven't recently seen crazy over-estimates of the number of dead tuples, at\nleast if you count dead tids as tuples. 
It might be worth splitting the dead\ntuple count into dead tuples and dead items.\n\nAre you mostly seeing over or under estimates?\n\n\n> > Second, we could track the number of cancellations since the last [auto]vacuum\n> > in pgstat, and only trigger the anti-cancel behaviour when autovacuum has been\n> > cancelled a number of times.\n>\n> In theory that would be better than an approach along the lines I've\n> proposed, because it is directly based on a less aggressive approach\n> being tried a few times, and failing a few times. That part I like.\n>\n> However, I also don't like several things about this approach. First\n> of all it relies on relatively complicated infrastructure, for\n> something that can be critical.\n\nI don't think it relies on much more machinery than we already rely on? The\ndead/inserted tuple estimates come from pgstat already. The only really\nnew piece would be that the worker would need to do a\npgstat_report_autovac_failure() after getting cancelled, which doesn't seem\ntoo hard?\n\n\n> Second, it will be hard to test.\n\nIt doesn't seem too bad. A short naptime, cancelling autovac in a loop, and\nthe threshold should quickly be reached? We also could add a helper function\nto artificially increase the failure count.\n\n\n> Third, perhaps it would make sense to give the earlier/less aggressive\n> approach (a table age av that is still autocancellable) quite a large\n> number of attempts before giving up. If the table age isn't really\n> growing too fast, why not continue to be patient, possibly for quite a\n> long time?\n\nYea, that's true. But it's also easy to get to the point that you collect so\nmuch \"debt\" that it'll be hard to get out of. Particularly once the dead items\nspace doesn't fit all the dead tids, the subsequent parts of the table won't\nget processed by vacuum if it frequently is cancelled before the index scans\nend, leading to a huge amount of work later.\n\n\n\n> Perhaps a hybrid strategy could be useful? 
Something like what I came\n> up with already, *plus* a mechanism that gives up after (say) 1000\n> cancellations, and escalates to no-auto-cancel, regardless of table\n> age. It seems sensible to assume that a less aggressive approach is\n> just hopeless relatively quickly (in wall clock time and logical XID\n> time) once we see sufficiently many cancellations against the same\n> table.\n\nMaybe. Making it explainable is presumably the hard part. We've historically\nfailed to make this area understandable, so maybe we don't need to try :)\n\n\nSomehow this made me think of a somewhat crazy, and largely unrelated, idea:\nWhy don't we use the currently unused VM bit combination to indicate pages\nwith dead tids? We could have an [auto]vacuum mode where it scans just pages\nwith the dead tids bit set. Particularly when on-access heap pruning is doing\nmost of the work, that could be quite useful to more cheaply get rid of the\ndead tids. Obviously we'd also set them when vacuum decides / was told not to\ndo index cleanup. Yes, it'd obviously be less effective than lots of the\nthings we discussed in this area (needing to re-collect the dead tids on the\nindicated pages), but it'd have the advantage of not needing a lot of new\ninfrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n
"msg_date": "Fri, 13 Jan 2023 21:55:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
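The semantics Andres proposes for the float GUC in the message above can be sketched as follows. This is illustrative Python only, not a PostgreSQL patch; the function name is an assumption, and the 200 million and 1.6 billion figures used in the example are the stock autovacuum_freeze_max_age and vacuum_failsafe_age defaults:

```python
def no_auto_cancel_age(setting: float,
                       autovacuum_freeze_max_age: int,
                       vacuum_failsafe_age: int) -> int:
    """Positive settings are absolute XID ages; negative settings are a
    multiple of autovacuum_freeze_max_age. The computed age is capped at
    vacuum_failsafe_age, since a failsafe autovacuum shouldn't be
    cancellable."""
    if setting >= 0:
        age = int(setting)
    else:
        age = int(-setting * autovacuum_freeze_max_age)
    return min(age, vacuum_failsafe_age)

# The suggested default of -1.8 with stock settings:
print(no_auto_cancel_age(-1.8, 200_000_000, 1_600_000_000))  # 360000000
```

With the default autovacuum_freeze_max_age of 200 million, -1.8 lands the no-auto-cancel point at 360 million XIDs, comfortably below the failsafe cap.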
{
"msg_contents": "On Fri, Jan 13, 2023 at 9:09 PM Andres Freund <andres@anarazel.de> wrote:\n> > > If I understand the patch correctly, we now have the following age based\n> > > thresholds for av:\n> > >\n> > > - force-enable autovacuum:\n> > > oldest_datfrozenxid + autovacuum_freeze_max_age < nextXid\n> > > - autovacuum based on age:\n> > > freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> > > tableagevac = relfrozenxid < recentXid - freeze_max_age\n> > > - prevent auto-cancellation:\n> > > freeze_max_age = Min(autovacuum_freeze_max_age, table_freeze_max_age)\n> > > prevent_auto_cancel_age = Min(freeze_max_age * 2, 1 billion)\n> > > prevent_auto_cancel = relfrozenxid < recentXid - prevent_auto_cancel_age\n> > >\n> > > Is that right?\n> >\n> > That summary looks accurate, but I'm a bit confused about why you're\n> > asking the question this way. I thought that it was obvious that the\n> > patch doesn't change most of these things.\n>\n> For me it was helpful to clearly list the triggers when thinking about the\n> issue. I found the diff hard to read and, as noted above, the logic for the\n> auto cancel threshold quite confusing, so ...\n\nI really dislike formulas like Min(freeze_max_age * 2, 1 billion).\nThat looks completely magical from a user perspective. Some users\naren't going to understand autovacuum behavior at all. Some will, and\nwill be able to compare age(relfrozenxid) against\nautovacuum_freeze_max_age. Very few people are going to think to\ncompare age(relfrozenxid) against some formula based on\nautovacuum_freeze_max_age. I guess if we document it, maybe they will.\n\nBut even then, what's the logic behind that formula? 
I am not entirely\nconvinced that we need to separate the force-a-vacuum threshold from\nthe don't-cancel threshold, but if we do separate them, what's the\npurpose of having the clearance between them increase as you increase\nautovacuum_freeze_max_age from 0 to 500 million, and thereafter\ndecrease until it reaches 0 at 1 billion? I can't explain the logic\nbehind that except by saying \"well, somebody came up with an arbitrary\nformula\".\n\nI do like the idea of driving the auto-cancel behavior off of the\nresults of previous attempts to vacuum the table. That could be done\nindependently of the XID age of the table. If we've failed to vacuum\nthe table, say, 10 times, because we kept auto-cancelling, it's\nprobably appropriate to force the issue. It doesn't really matter\nwhether the autovacuum triggered because of bloat or because of XID\nage. Letting either of those things get out of control is bad. What I\nthink happens fairly commonly right now is that the vacuums just keep\ngetting cancelled until the table's XID age gets too old, and then we\nfinally force the issue. But at that point a lot of harm has already\nbeen done. In a frequently updated table, waiting 300 million XIDs to\nstop cancelling the vacuum is basically condemning the user to have to\nrun VACUUM FULL. The table can easily be ten or a hundred times bigger\nthan it should be by that point.\n\nAnd that's a big reason why I am skeptical about the patch as\nproposed. It raises the threshold for auto-cancellation in cases where\nit's sometimes already far too high.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Jan 2023 11:25:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
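The "clearance" Robert objects to in the message above, meaning the gap between the age-based vacuum trigger and the patch's Min(freeze_max_age * 2, 1 billion) auto-cancel threshold, is easy to tabulate. A quick illustrative calculation (not PostgreSQL code; the helper name is invented):

```python
BILLION = 1_000_000_000

def clearance(freeze_max_age: int) -> int:
    """XID gap between the age-based trigger (freeze_max_age) and the
    patch's no-auto-cancel threshold, Min(freeze_max_age * 2, 1 billion)."""
    return min(freeze_max_age * 2, BILLION) - freeze_max_age

for fma in (200_000_000, 500_000_000, 800_000_000, BILLION):
    print(fma, clearance(fma))
```

The gap grows with the setting up to 500 million, then shrinks back to zero at 1 billion, which is exactly the arbitrary-looking shape being criticized.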
{
"msg_contents": "On Fri, Jan 13, 2023 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> I think part of our difference around a high autovacuum_freeze_max_age are due\n> to things you're trying to address here - if no-auto-cancel is a separate\n> threshold from autovacuum_freeze_max_age, it's less problematic to set\n> autovacuum_freeze_max_age to something lower.\n\nThat seems like a much smaller difference of opinion than I'd imagined\nit was before now.\n\n> But yes, there's a remaining difference of opinion / experience.\n\nPerhaps it's also a communication issue. I don't disagree with\npragmatic decisions that made sense given very particular limitations\nin Postgres, which this is starting to sound like now.\n\nWhen I express skepticism of very high autovacuum_freeze_max_age\nsettings, it's mostly just that I don't think that age(relfrozenxid)\nis at all informative in the way that something that triggers\nautovacuum ought to be. Worst of all, the older relfrozenxid gets, the\nless informative it becomes. I'm sure that it can be safe to use very\nhigh autovacuum_freeze_max_age values, but it seems like a case of\nusing a very complicated and unreliable thing to decide when to\nVACUUM.\n\n> > It might make sense to always give a small fixed amount of headroom\n> > when autovacuum_freeze_max_age is set to very high values. Maybe just\n> > 5 million XIDs/MXIDs. That would probably be just as effective as\n> > (say) 500 million in almost all cases. But then you have to accept\n> > another magic number.\n>\n> I suspect that most systems with a high autovacuum_freeze_max_age use it\n> because 200m leads to too frequent autovacuums - a few million won't do much\n> for those.\n\nMy point was mostly that it really doesn't cost us anything, and it\ncould easily help, so it might be worthwhile.\n\nI wonder if these users specifically get too many aggressive\nautovacuums with lower autovacuum_freeze_max_age settings? 
Maybe they\neven fail to get enough non-aggressive autovacuums? If that's what it\nis, then that makes perfect sense to me. However, I tend to think of\nthis as an argument against aggressive VACUUMs, not an argument in\nfavor of high autovacuum_freeze_max_age settings (as I said,\ncommunication is hard).\n\n> How about a float autovacuum_no_auto_cancel_age where positive values are\n> treated as absolute values, and negative values are a multiple of\n> autovacuum_freeze_max_age? And where the \"computed\" age is capped at\n> vacuum_failsafe_age? A \"failsafe\" autovacuum clearly shouldn't be cancelled.\n>\n> And maybe a default setting of -1.8 or so?\n\nI think that would work fine, as a GUC (with no reloption).\n\n> Well, historically there wasn't all that much protection. And I suspect there\n> still might be some dragons. We really need to get rid of the remaining places\n> that cache 32bit xids across transactions.\n\nMaybe, but our position is that it's supported, it's safe -- there\nwill be no data loss (or else it's a bug, and one that we'd take very\nseriously at that). Obviously it's a bad idea to allow this to happen,\nbut surely having the system enter xidStopLimit is sufficient\ndisincentive for users.\n\n> I've seen a ~2TB table grow to ~20TB due to dead tuples, at which point the\n> server crash-restarted due to WAL ENOSPC. I think in that case there wasn't\n> even DDL, they just needed to make the table readonly, for a brief moment, a\n> few times a day. The problem started once the indexes grew to be large enough\n> that the time for (re-)finding dead tuples and an index scan phase got large\n> enough that they were unlucky to be killed a few times in a row. After that\n> autovac never got through the index scan phase. 
Not doing any \"new\" work, just\n> collecting the same dead tids over and over, scanning the indexes for those\n> tids, never quite finishing.\n\nThat makes sense to me, though I wonder if there would have been\nanother kind of outage (not the one you actually saw) had the\nautocancellation behavior somehow been disabled after the first few\ncancellations.\n\nThis sounds like a great argument in favor of suspend-and-resume as a\nway of handling autocancellation -- no useful work needs to be thrown\naway for AV to yield for a minute or two. One ambition of mine for the\nvisibility map snapshot infrastructure was to be able to support\nsuspend-and-resume. It wouldn't be that hard for autovacuum to\nserialize everything, and use that to pick up right where an\nautovacuum worker left off when it was autocancelled. Same\nOldestXmin starting point as before, same dead_items array, same\nnumber of scanned_pages (and pages left to scan).\n\n> > What you call aspect 2 (the issue with disastrous HW lock traffic jams\n> > involving TRUNCATE being run from a cron job, etc) is a big goal of\n> > mine for this patch series. You seem unsure of how effective my\n> > approach (or an equally simple approach based on table age heuristics)\n> > will be. Is that the case?\n>\n> I was primarily concerned about situations where an admin interactively was\n> trying to get rid of the table holding back the xid horizon, after some\n> problem caused a lot of work to pile up.\n\nFair enough, but the outages that I'm mostly thinking about here\nweren't really like that. There wasn't anything holding back the\nhorizon, at any point. It was just that the autocancellation behavior\nwas disruptive in some critical way.\n\n> If you know postgres internals, it's easy. You know that autovac stops\n> auto-cancelling after freeze_max_age and you know that the lock queue is\n> ordered. 
But if you aren't, it'll take a while to\n> figure out that the DROP TABLE isn't progressing due to a lock conflict, at\n> which point you'll cancel the statement (possibly having wreaked havoc with\n> all other accesses). Then you figure out that you need to cancel\n> autovacuum. After that you try the DROP TABLE again - but the next worker has\n> gotten to work on the table.\n\nYeah, that's pretty bad. Maybe DROP TABLE and TRUNCATE should be\nspecial cases? Maybe they should always be able to auto cancel an\nautovacuum?\n\n> > > I have two ideas around 3):\n> > >\n> > > First, we could introduce thresholds for the tuple thresholds, after which\n> > > autovacuum isn't cancellable anymore.\n> >\n> > Do you think that's a good idea? I just don't trust those statistics,\n> > at all. As in, I think they're often complete garbage.\n>\n> I have seen them be reasonably accurate in plenty of busy systems. The most\n> common problem I've seen is that the amount of assumed dead / inserted tuples\n> is way too high, because the server crash-restarted at some point. We now\n> have most of the infrastructure for using a slightly older version of stats\n> after a crash, which'd make them less inaccurate.\n\nDid you mean way too low?\n\n> I haven't recently seen crazy over-estimates of the number of dead tuples, at\n> least if you count dead tids as tuples. It might be worth splitting the dead\n> tuple count into dead tuples and dead items.\n>\n> Are you mostly seeing over or under estimates?\n\nIt's not so much what I've seen. 
It's that the actual approach has\nlots of problems.\n\nReferring to my notes, here are what seem to me to be serious problems:\n\n* We are very naive about what a dead tuple even means, and we totally\nfail to account for the fact that only the subset of heap pages that\nare PageIsAllVisible() are interesting to VACUUM -- focusing on the\nwhole table just seems wrong.\n\nPer https://postgr.es/m/CAH2-Wz=MGFwJEpEjVzXwEjY5yx=UuNPzA6Bt4DSMasrGLUq9YA@mail.gmail.com\n\n* Stub LP_DEAD items in heap pages are extremely different to dead\nheap-only tuples in heap pages, which we ignore.\n\nPer https://postgr.es/m/CAH2-WznrZC-oHkB+QZQS65o+8_Jtj6RXadjh+8EBqjrD1f8FQQ@mail.gmail.com\n\n* Problem where insert-driven autovacuums\n(autovacuum_vacuum_insert_threshold/autovacuum_vacuum_insert_scale_factor\ntriggers AVs) become further spaced apart as a consequence of one\nVACUUM operation taking far longer than usual (generally because it's\nan aggressive VACUUM that follows several non-aggressive VACUUMs).\n\nPer https://postgr.es/m/CAH2-Wzn=bZ4wynYB0hBAeF4kGXGoqC=PZVKHeerBU-je9AQF=g@mail.gmail.com\n\nIt's quite possible to get approximately the desired outcome with an\nalgorithm that is completely wrong -- the way that we sometimes need\nautovacuum_freeze_max_age to deal with bloat is a great example of\nthat. Even then, there may still be serious problems that are well\nworth solving.\n\n> > However, I also don't like several things about this approach. First\n> > of all it relies on relatively complicated infrastructure, for\n> > something that can be critical.\n>\n> I don't think it relies on much more machinery than we already rely on? The\n> dead/inserted tuple estimates come from pgstat already. The only really\n> new piece would be that the worker would need to do a\n> pgstat_report_autovac_failure() after getting cancelled, which doesn't seem\n> too hard?\n\nYeah, but it is a new dependency on stored state. 
Certainly doable,\nbut hard enough that it might be better to add that part later on.\n\n> Somehow this made me think of a somewhat crazy, and largely unrelated, idea:\n> Why don't we use the currently unused VM bit combination to indicate pages\n> with dead tids? We could have an [auto]vacuum mode where it scans just pages\n> with the dead tids bit set. Particularly when on-access heap pruning is doing\n> most of the work, that could be quite useful to more cheaply get rid of the\n> dead tids. Obviously we'd also set them when vacuum decides / was told not to\n> do index cleanup. Yes, it'd obviously be less effective than lots of the\n> things we discussed in this area (needing to re-collect the dead tids on the\n> indicated pages), but it'd have the advantage of not needing a lot of new\n> infrastructure.\n\nI wonder if it would be possible to split up the work of VACUUM into\nmultiple phases that can be processed independently. The dead_items\narray could be serialized and stored in a temp file. That's not the\nsame as some of the more complicated stuff we talked about in the last\ncouple of years, such as a dedicated fork for Dead TIDs. It's more\nlike an extremely flexible version of the same basic design for\nVACUUM, with the ability to slow down and speed up based on a system\nlevel view of things (e.g., checkpointing information). And with index\nvacuuming happening on a highly deferred timeline in some cases.\nPossibly we could make each slice of work processed by any available\nautovacuum worker. Autovacuum workers could become \"headless\".\n\nYou would need some kind of state machine to make sure that critical\ndependencies were respected (e.g., always do the heap vacuuming step\nafter all indexes are vacuumed), but that possibly isn't that hard,\nand still gives you a lot.\n\nAs for this patch of mine: do you think that it would be acceptable to\npursue a version based on your autovacuum_no_auto_cancel_age design\nfor 16? 
Perhaps this can include something like\npgstat_report_autovac_failure(). It's not even the work of\nimplementing pgstat_report_autovac_failure() that creates risk that\nit'll miss the 16 feature freeze deadline. I'm more concerned that\nintroducing a more complicated design will lead to the patch being\nbikeshedded to death.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Jan 2023 13:58:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
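The phase-ordering constraint mentioned in the message above (the heap vacuuming step may only run once every index has been vacuumed) is the kind of dependency a small state machine can enforce. The sketch below is purely a thought experiment in Python; every name is invented, and the serialized dead_items array is only alluded to in a comment:

```python
class VacuumJob:
    """Each call to advance() represents one resumable slice of work picked
    up by whichever worker is available; the serialized dead_items array
    would travel with the job between slices."""

    def __init__(self, nindexes: int):
        self.phase = "scan_heap"
        self.indexes_left = nindexes

    def advance(self) -> str:
        if self.phase == "scan_heap":
            # With no indexes, skip straight to heap vacuuming.
            self.phase = "vacuum_indexes" if self.indexes_left else "vacuum_heap"
        elif self.phase == "vacuum_indexes":
            self.indexes_left -= 1          # one index per work slice
            if self.indexes_left == 0:      # dependency: ALL indexes first
                self.phase = "vacuum_heap"
        elif self.phase == "vacuum_heap":
            self.phase = "done"
        return self.phase

job = VacuumJob(nindexes=2)
while job.phase != "done":
    job.advance()
```

The point of the sketch is only the ordering guarantee: no path reaches "vacuum_heap" while any index remains unvacuumed.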
{
"msg_contents": "On Mon, Jan 16, 2023 at 8:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I really dislike formulas like Min(freeze_max_age * 2, 1 billion).\n> That looks completely magical from a user perspective. Some users\n> aren't going to understand autovacuum behavior at all. Some will, and\n> will be able to compare age(relfrozenxid) against\n> autovacuum_freeze_max_age. Very few people are going to think to\n> compare age(relfrozenxid) against some formula based on\n> autovacuum_freeze_max_age. I guess if we document it, maybe they will.\n\nWhat do you think of Andres' autovacuum_no_auto_cancel_age proposal?\n\nAs I've said several times already, I am by no means attached to the\ncurrent formula.\n\n> I do like the idea of driving the auto-cancel behavior off of the\n> results of previous attempts to vacuum the table. That could be done\n> independently of the XID age of the table.\n\nEven when the XID age of the table has already significantly surpassed\nautovacuum_freeze_max_age, say due to autovacuum worker starvation?\n\n> If we've failed to vacuum\n> the table, say, 10 times, because we kept auto-cancelling, it's\n> probably appropriate to force the issue.\n\nI suggested 1000 times upthread. 10 times seems very low, at least if\n\"number of times cancelled\" is the sole criterion, without any\nattention paid to relfrozenxid age or some other tiebreaker.\n\n> It doesn't really matter\n> whether the autovacuum triggered because of bloat or because of XID\n> age. Letting either of those things get out of control is bad.\n\nWhile inventing a new no-auto-cancel behavior that prevents bloat from\ngetting completely out of control may well have merit, I don't see why\nit needs to be attached to this other effort.\n\nI think that the vast majority of individual tables have autovacuums\ncancelled approximately never, and so my immediate concern is\nameliorating cases where not being able to auto-cancel once in a blue\nmoon causes an outage. 
Sure, the opposite problem also exists, and I\nthink that it would be really bad if it was made significantly worse\nas an unintended consequence of a patch that addressed just the first\nproblem. But that doesn't mean we have to solve both problems together\nat the same time.\n\n> But at that point a lot of harm has already\n> been done. In a frequently updated table, waiting 300 million XIDs to\n> stop cancelling the vacuum is basically condemning the user to have to\n> run VACUUM FULL. The table can easily be ten or a hundred times bigger\n> than it should be by that point.\n\nThe rate at which relfrozenxid ages is just about useless as a proxy\nfor how much wall clock time has passed with a given workload --\nworkloads are usually very bursty. It's much worse still as a proxy\nfor what has changed in the table; completely static tables have their\nrelfrozenxid age at exactly the same rate as the most frequently\nupdated table in the same database (the table that \"consumes the most\nXIDs\"). So while antiwraparound autovacuum no-auto-cancel behavior may\nindeed save the user from problems with serious bloat, it will happen\npretty much by mistake. Not that it doesn't happen all the same -- of\ncourse it does.\n\nThat factor (the mistake factor) doesn't mean I take the point any\nless seriously. What I don't take seriously is the idea that the\nprecise XID age was ever crucially important.\n\nMore generally, I just don't accept that this leaves us with no room for\nsomething along the lines of my proposal, such as Andres'\nautovacuum_no_auto_cancel_age concept. As I've said already, there will\nusually be a very asymmetric quality to the problem in cases like the\nJoyent outage. 
Even a modest amount of additional XID-space-headroom\nwill very likely be all that will be needed at the critical juncture.\nIt may not be perfect, but it still has every potential to make things\nsafer for some users, without making things any less safe for other\nusers.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Jan 2023 20:11:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 11:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Jan 16, 2023 at 8:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I really dislike formulas like Min(freeze_max_age * 2, 1 billion).\n> > That looks completely magical from a user perspective. Some users\n> > aren't going to understand autovacuum behavior at all. Some will, and\n> > will be able to compare age(relfrozenxid) against\n> > autovacuum_freeze_max_age. Very few people are going to think to\n> > compare age(relfrozenxid) against some formula based on\n> > autovacuum_freeze_max_age. I guess if we document it, maybe they will.\n>\n> What do you think of Andres' autovacuum_no_auto_cancel_age proposal?\n\nI like it better than your proposal. I don't think it's a fundamental\nimprovement and I would rather see a fundamental improvement, but I\ncan see it being better than nothing.\n\n> > I do like the idea of driving the auto-cancel behavior off of the\n> > results of previous attempts to vacuum the table. That could be done\n> > independently of the XID age of the table.\n>\n> Even when the XID age of the table has already significantly surpassed\n> autovacuum_freeze_max_age, say due to autovacuum worker starvation?\n>\n> > If we've failed to vacuum\n> > the table, say, 10 times, because we kept auto-cancelling, it's\n> > probably appropriate to force the issue.\n>\n> I suggested 1000 times upthread. 
10 times seems very low, at least if\n> \"number of times cancelled\" is the sole criterion, without any\n> attention paid to relfrozenxid age or some other tiebreaker.\n\nHmm, I think that a threshold of 1000 is far too high to do much good.\nBy the time we've tried to vacuum a table 1000 times and failed every\ntime, I anticipate that the situation will be pretty dire, regardless\nof why we thought the table needed to be vacuumed in the first place.\nIn the best case, with autovacuum_naptime = 1 minute, failing 1000 times\nmeans that we've delayed vacuuming the table for at least 16 hours.\nThat's assuming that there's a worker available to retry every minute\nand that we fail quickly. If it's a 2 hour vacuum operation and we\ntypically fail about halfway through, it could take us over a month to\nhit 1000 failures. There are many tables out there that get enough\ninserts, updates, and deletes that a 16-hour delay will result in\nirreversible bloat, never mind a 41-day delay. After even a few days,\nwraparound could become critical even if bloat isn't.\n\nI'm not sure why a threshold of 10 would be too low. It seems to me\nthat if we fail ten times in a row to vacuum a table and fail for the\nsame reason every time, we're probably going to keep failing for that\nreason. If that is true, we will be better off if we force the issue\nsooner rather than later. There's no value in letting the table bloat\nout the wazoo and the cluster approach a wraparound shutdown before we\ninsist. Consider a more mundane example. If I try to start my car or\nmy dishwasher or my computer or my toaster oven ten times and it fails\nten times in a row, and the failure mode appears to be the same each\ntime, I am not going to sit there and try 990 more times hoping things\nget better, because that seems very unlikely to help. 
Honestly,\ndepending on the situation, I might not even get to ten times before I\nswitch to doing some form of troubleshooting and/or calling someone\nwho could repair the device.\n\nIn fact I think there's a decent argument that a threshold of ten is\npossibly too high here, too. If you wait until the tenth try before\nyou try not auto-cancelling, then a table with a workload that makes\nauto-cancelling 100% probable will get vacuumed 10% as often as it\nwould otherwise. I think there are cases where that would be OK, but\nprobably on the whole it's not going to go very well. The only problem\nI see with lowering the threshold below ~10 is that the signal starts\nto get weak. If something fails for the same reason ten times in a row\nyou can be pretty sure it's a chronic problem. If you made the\nthreshold say three you'd probably start making bad decisions\nsometimes -- you'd think that you had a chronic problem when really\nyou just got a bit unlucky.\n\nTo get back to the earlier question above, I think that if the\nretries-before-not-auto-cancelling threshold were low enough to be\neffective, you wouldn't necessarily need to consider XID age as a\nsecond reason for not auto-cancelling. You would want to force the\nbehavior anyway when you hit emergency mode, because that should force\nall the mitigations we have, but I don't know that you need to do\nanything before that.\n\n> > It doesn't really matter\n> > whether the autovacuum triggered because of bloat or because of XID\n> > age. Letting either of those things get out of control is bad.\n>\n> While inventing a new no-auto-cancel behavior that prevents bloat from\n> getting completely out of control may well have merit, I don't see why\n> it needs to be attached to this other effort.\n\nIt doesn't, but I think it would be a lot more beneficial than just\nadding a new GUC. A lot of the fundamental stupidity of autovacuum\ncomes from its inability to consider the context. 
I've had various\nideas over the years about how to fix that, but this is far simpler\nthan some things I've thought about and I think it would help a lot of\npeople.\n\n> That factor (the mistake factor) doesn't mean I take the point any\n> less seriously. What I don't take seriously is the idea that the\n> precise XID age was ever crucially important.\n\nI agree. That's why I think driving this off of number of previous\nfailures would be better than driving it off of an XID age.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Jan 2023 10:26:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
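[Editorial note: Robert's back-of-the-envelope numbers above (16 hours of delay at 1000 failures with a 1-minute naptime, "over a month" when each attempt wastes an hour of vacuum work) can be checked with a short sketch. This is illustrative arithmetic only; the helper name is hypothetical and nothing here is PostgreSQL source code.]

```python
# Sketch: how long a table can go unvacuumed before a cancel-failure
# threshold forces a non-cancellable autovacuum. Hypothetical helper.

def min_delay_hours(failures, naptime_min=1.0, fail_after_min=0.0):
    """Each attempt waits one naptime, then runs fail_after_min minutes
    of vacuuming before being auto-cancelled."""
    return failures * (naptime_min + fail_after_min) / 60.0

# Best case from the discussion: retry every minute, fail instantly.
assert round(min_delay_hours(1000)) == 17  # ~16.7 hours

# A 2-hour vacuum that typically fails halfway through wastes ~60 min
# per attempt, so 1000 failures take roughly 42 days to accumulate.
days = min_delay_hours(1000, naptime_min=1.0, fail_after_min=60.0) / 24.0
assert 41 <= days <= 43

# A threshold of 10 bounds the best-case delay to minutes, not days.
assert min_delay_hours(10) < 1.0
```

The point the arithmetic makes is that the delay implied by a failure threshold scales with per-attempt cost, so a threshold chosen with the "fail instantly" case in mind behaves very differently for long-running vacuums.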
{
"msg_contents": "Hi,\n\nOn 2023-01-16 13:58:21 -0800, Peter Geoghegan wrote:\n> On Fri, Jan 13, 2023 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> When I express skepticism of very high autovacuum_freeze_max_age\n> settings, it's mostly just that I don't think that age(relfrozenxid)\n> is at all informative in the way that something that triggers\n> autovacuum ought to be. Worst of all, the older relfrozenxid gets, the\n> less informative it becomes. I'm sure that it can be safe to use very\n> high autovacuum_freeze_max_age values, but it seems like a case of\n> using a very complicated and unreliable thing to decide when to\n> VACUUM.\n\nBoth you and Robert said this, and I have seen it be true, but typically not\nfor large high-throughput OLTP databases, where I found increasing\nrelfrozenxid to be important. Sure, there's probably some up/down through the\nday / week, but it's likely to be pretty predictable.\n\nI think the problem is that an old relfrozenxid doesn't tell you how much\noutstanding work there is. Perhaps that's what both of you meant...\n\n\nI think that's not the fault of relfrozenxid as a trigger, but that we simply\ndon't keep enough other stats. We should imo at least keep track of:\n\nIn pg_class:\n\n- The number of all frozen pages, like we do for all-visible\n\n That'd give us a decent upper bound for the amount of work we need to do to\n increase relfrozenxid. 
It's important that this is crash safe (thus no\n pg_stats), and it only needs to be written when we'd likely make other\n changes to the pg_class row anyway.\n\n\nIn pgstats:\n\n- The number of dead items, incremented both by the heap scan and\n opportunistic pruning\n\n This would let us estimate how urgently we need to clean up indexes.\n\n- The xid/mxid horizons during the last vacuum\n\n- The number of pages with tuples that couldn't be removed due to the horizon\n during the last vacuum\n\n Together with the horizon, this would let us avoid repeated vacuums that\n won't help. Tracking the number of pages instead of tuples allows a lot\n better cost/benefit estimation of another vacuum.\n\n- The number of pages with tuples that couldn't be frozen\n\n Similar to the dead tuple one, except that it'd help avoid repeated vacuums\n to increase relfrozenxid, when it won't be able to help.\n\n\n> > I've seen a ~2TB table grow to ~20TB due to dead tuples, at which point the\n> > server crash-restarted due to WAL ENOSPC. I think in that case there wasn't\n> > even DDL, they just needed to make the table readonly, for a brief moment, a\n> > few times a day. The problem started once the indexes grew to be large enough\n> > that the time for (re-)finding dead tuples + and an index scan phase got large\n> > enough that they were unlucky to be killed a few times in a row. After that\n> > autovac never got through the index scan phase. 
Not doing any \"new\" work, just\n> > collecting the same dead tids over and over, scanning the indexes for those\n> > tids, never quite finishing.\n>\n> That makes sense to me, though I wonder if there would have been\n> another kind of outage (not the one you actually saw) had the\n> autocancellation behavior somehow been disabled after the first few\n> cancellations.\n\nI suspect so, but it's hard to know.\n\n\n> This sounds like a great argument in favor of suspend-and-resume as a\n> way of handling autocancellation -- no useful work needs to be thrown\n> away for AV to yield for a minute or two. One ambition of mine for the\n> visibility map snapshot infrastructure was to be able support\n> suspend-and-resume. It wouldn't be that hard for autovacuum to\n> serialize everything, and use that to pick up right where an\n> autovacuum worker left of at when it was autocancelled. Same\n> OldestXmin starting point as before, same dead_items array, same\n> number of scanned_pages (and pages left to scan).\n\nHm, that seems a lot of work. Without having held a lock you don't even know\nwhether your old dead items still apply. Of course it'd improve the situation\nsubstantially, if we could get it.\n\n\n\n> > If you know postgres internals, it's easy. You know that autovac stops\n> > auto-cancelling after freeze_max_age and you know that the lock queue is\n> > ordered. So you issue DROP TABLE / TRUNCATE / whatever and immediately\n> > afterwards cancel the autovac worker. But if you aren't, it'll take a while to\n> > figure out that the DROP TABLE isn't progressing due to a lock conflict, at\n> > which point you'll cancel the statement (possibly having wrecked havoc with\n> > all other accesses). Then you figure out that you need to cancel\n> > autovacuum. After you try the DROP TABLE again - but the next worker has\n> > gotten to work on the table.\n>\n> Yeah, that's pretty bad. Maybe DROP TABLE and TRUNCATE should be\n> special cases? 
Maybe they should always be able to auto cancel an\n> autovacuum?\n\nYea, I think so. It's not obvious how to best pass down that knowledge into\nProcSleep(). It'd have to be in the LOCALLOCK, I think. Looks like the best\nway would be to change LockAcquireExtended() to get a flags argument instead\nof reportMemoryError, and then we could add LOCK_ACQUIRE_INTENT_DROP &\nLOCK_ACQUIRE_INTENT_TRUNCATE or such. Then the same for\nRangeVarGetRelidExtended(). It already \"customizes\" how to lock based on RVR*\nflags.\n\n\n> > > > I have two ideas around 3):\n> > > >\n> > > > First, we could introduce thresholds for the tuple thresholds, after which\n> > > > autovacuum isn't cancelable anymore.\n> > >\n> > > Do you think that's a good idea? I just don't trust those statistics,\n> > > at all. As in, I think they're often complete garbage.\n> >\n> > I have seen them be reasonably accurate in plenty busy systems. The most\n> > common problem I've seen is that the amount of assumed dead / inserted tuples\n> > is way too high, because the server crash-restarted at some point. We now\n> > have most of the infrastructure for using a slightly older version of stats\n> > after a crash, which'd make them less inaccurate.\n>\n> Did you mean way too low?\n\nErr, yes.\n\n\n> > I haven't recently seen crazy over-estimates of the number of dead tuples, at\n> > least if you count dead tids as tuples. It might be worth to split the dead\n> > tuple count into a dead tuples and dead items.\n> >\n> > Are you mostly seeing over or under estimates?\n>\n> It's not so much what I've seen. It's that the actual approach has\n> lots of problems.\n>\n> Referring to my notes, here are what seem to me to be serious problems:\n\nISTM that some of what you write below would be addressed, at least partially,\nby the stats I proposed above. 
Particularly keeping some \"page granularity\"\ninstead of \"tuple granularity\" stats seems helpful.\n\nIt'd be great if we could update the page-granularity stats in\nheap_{insert,multi_insert,update,delete,heapgetpage}. But without page level\nflags like \"has any dead tuples\" that's probably too expensive.\n\n\n> * We are very naive about what a dead tuple even means, and we totally\n> fail to account for the fact that only the subset of heap pages that\n> are not PageIsAllVisible() are interesting to VACUUM -- focusing on the\n> whole table just seems wrong.\n>\n> Per https://postgr.es/m/CAH2-Wz=MGFwJEpEjVzXwEjY5yx=UuNPzA6Bt4DSMasrGLUq9YA@mail.gmail.com\n\n\n> * Stub LP_DEAD items in heap pages are extremely different to dead\n> heap-only tuples in heap pages, which we ignore.\n>\n> Per https://postgr.es/m/CAH2-WznrZC-oHkB+QZQS65o+8_Jtj6RXadjh+8EBqjrD1f8FQQ@mail.gmail.com\n\n> * Problem where insert-driven autovacuums\n> (autovacuum_vacuum_insert_threshold/autovacuum_vacuum_insert_scale_factor\n> triggers AVs) become further spaced apart as a consequence of one\n> VACUUM operation taking far longer than usual (generally because it's\n> an aggressive VACUUM that follows several non-aggressive VACUUMs).\n\n> Per https://postgr.es/m/CAH2-Wzn=bZ4wynYB0hBAeF4kGXGoqC=PZVKHeerBU-je9AQF=g@mail.gmail.com\n\nYea, that's not great. But seems fairly addressable.\n\n\n> It's quite possible to get approximately the desired outcome with an\n> algorithm that is completely wrong -- the way that we sometimes need\n> autovacuum_freeze_max_age to deal with bloat is a great example of\n> that.\n\nYea. I think this is part of why I like my idea about tracking more observations\nmade by the last vacuum - they're quite easy to get right, and they\nself-correct, rather than potentially ending up causing ever-wronger stats.\n\n\n> Even then, there may still be serious problems that are well\n> worth solving.\n\nRight. 
I think it's fundamental that we get a lot better estimates about the\namount of work needed. Without that we have no chance of finishing autovacuums\nbefore problems become too big.\n\n\n\n> > Somehow this made me think of a somewhat crazy, and largely unrelated, idea:\n> > Why don't we use the currently unused VM bit combination to indicate pages\n> > with dead tids? We could have an [auto]vacuum mode where it scans just pages\n> > with the dead tids bit set. Particularly when on-access heap pruning is doing\n> > most of the work, that could be quite useful to more cheaply get rid of the\n> > dead tids. Obviously we'd also set them when vacuum decides / was told not to\n> > do index cleanup. Yes, it'd obviously be less effective than lots of the\n> > things we discussed in this area (needing to re-collect the dead tids on the\n> > indicated pages), but it'd have the advantage of not needing a lot of new\n> > infrastructure.\n>\n> I wonder if it would be possible to split up the work of VACUUM into\n> multiple phases that can be processed independently. The dead_items\n> array could be serialized and stored in a temp file. That's not the\n> same as some of the more complicated stuff we talked about in the last\n> couple of years, such as a dedicated fork for Dead TIDs. It's more\n> like an extremely flexible version of the same basic design for\n> VACUUM, with the ability to slow down and speed up based on a system\n> level view of things (e.g., checkpointing information). And with index\n> vacuuming happening on a highly deferred timeline in some cases.\n> Possibly we could make each slice of work processed by any available\n> autovacuum worker. Autovacuum workers could become \"headless\".\n\nI don't know. I think the more basic idea I describe has significant\nadvantages - most importantly being able to target autovacuum on the work that\non-access pruning couldn't deal with. 
As you know it's very common that most\nrow versions are HOT and that the remaining dead tuples also get removed by\non-access pruning. Which often leads to running out of usable items, but\nautovacuum won't be triggered because there's not all that much garbage\noverall. Without the ability for autovacuum to target such pages\naggressively, I don't think we're going to improve such common workloads a\nwhole lot. And serialized vacuum state won't help, because that still requires\nvacuum to scan all the !all-visible pages to discover them. Most of which\nwon't contain dead tuples in a lot of workloads.\n\n\n\n> As for this patch of mine: do you think that it would be acceptable to\n> pursue a version based on your autovacuum_no_auto_cancel_age design\n> for 16? Perhaps this can include something like\n> pgstat_report_autovac_failure(). It's not even the work of\n> implementing pgstat_report_autovac_failure() that creates risk that\n> it'll miss the 16 feature freeze deadline. I'm more concerned that\n> introducing a more complicated design will lead to the patch being\n> bikeshedded to death.\n\nI don't feel like I have a good handle on what could work for 16 and what\ncouldn't. Personally I think something like autovacuum_no_auto_cancel_age\nwould be an improvement, but I also don't quite feel satisfied with it.\n\nTracking the number of autovac failures seems uncontroversial and quite\nbeneficial, even if the rest doesn't make it in. It'd at least let users\nmonitor for tables where autovac is likely to swoop in in anti-wraparound\nmode.\n\nPerhaps it's worth separately tracking the number of times a backend would have\nliked to cancel autovac, but couldn't due to anti-wrap? If changes to the\nno-auto-cancel behaviour don't make it in, it'd at least allow us to collect\nmore data about the prevalence of the problem and in what situations it\noccurs? 
Even just adding some logging for that case seems like it'd be an\nimprovement.\n\n\nI think with a bit of polish \"Add autovacuum trigger instrumentation.\" ought\nto be quickly mergeable.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 10:02:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
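[Editorial note: the LockAcquireExtended() flags idea Andres sketches above (an intent flag carried down to the lock-wait logic so DROP TABLE / TRUNCATE may cancel even a no-auto-cancel autovacuum) can be modeled in a few lines. This is a hedged Python model of the decision only: the flag names mirror the proposal, but the function and its signature are hypothetical, not the C code.]

```python
# Model of the proposed "intent" flags and the resulting cancel decision.
# Hypothetical sketch; LOCK_ACQUIRE_INTENT_* are names from the proposal,
# not an existing PostgreSQL API.
from enum import IntFlag

class AcquireFlags(IntFlag):
    NONE = 0
    REPORT_MEMORY_ERROR = 1   # replaces the old boolean argument
    INTENT_DROP = 2
    INTENT_TRUNCATE = 4

def should_cancel_autovacuum(av_no_auto_cancel, waiter_flags):
    """Decide whether a lock waiter may cancel a conflicting autovacuum."""
    if not av_no_auto_cancel:
        return True  # ordinary autovacuums always yield to lock waiters
    # No-auto-cancel (anti-wraparound) mode: only destructive DDL,
    # identified by its intent flag, still gets to cancel the worker.
    return bool(waiter_flags & (AcquireFlags.INTENT_DROP |
                                AcquireFlags.INTENT_TRUNCATE))

assert should_cancel_autovacuum(False, AcquireFlags.NONE)
assert not should_cancel_autovacuum(True, AcquireFlags.NONE)
assert should_cancel_autovacuum(True, AcquireFlags.INTENT_DROP)
assert should_cancel_autovacuum(True, AcquireFlags.INTENT_TRUNCATE)
```

The design point is that the intent travels with the lock request itself, so the sleeping-waiter side can distinguish a DROP TABLE from an ordinary LOCK TABLE without any out-of-band signaling.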
{
"msg_contents": "Hi,\n\nOn 2023-01-17 10:26:52 -0500, Robert Haas wrote:\n> On Mon, Jan 16, 2023 at 11:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Mon, Jan 16, 2023 at 8:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I really dislike formulas like Min(freeze_max_age * 2, 1 billion).\n> > > That looks completely magical from a user perspective. Some users\n> > > aren't going to understand autovacuum behavior at all. Some will, and\n> > > will be able to compare age(relfrozenxid) against\n> > > autovacuum_freeze_max_age. Very few people are going to think to\n> > > compare age(relfrozenxid) against some formula based on\n> > > autovacuum_freeze_max_age. I guess if we document it, maybe they will.\n> >\n> > What do you think of Andres' autovacuum_no_auto_cancel_age proposal?\n> \n> I like it better than your proposal. I don't think it's a fundamental\n> improvement and I would rather see a fundamental improvement, but I\n> can see it being better than nothing.\n\nThat's similar to my feelings about it.\n\nI do think it'll be operationally nice to have at least some window where an\nautovacuum is triggered due to age and where it won't prevent cancels. In many\nsituations it'll likely suffice if that window is autovacuum_naptime *\nxids_per_sec large, but of course that's easily enough exceeded.\n\n\n> > > I do like the idea of driving the auto-cancel behavior off of the\n> > > results of previous attempts to vacuum the table. That could be done\n> > > independently of the XID age of the table.\n> >\n> > Even when the XID age of the table has already significantly surpassed\n> > autovacuum_freeze_max_age, say due to autovacuum worker starvation?\n> >\n> > > If we've failed to vacuum\n> > > the table, say, 10 times, because we kept auto-cancelling, it's\n> > > probably appropriate to force the issue.\n> >\n> > I suggested 1000 times upthread. 
10 times seems very low, at least if\n> > \"number of times cancelled\" is the sole criterion, without any\n> > attention paid to relfrozenxid age or some other tiebreaker.\n> \n> Hmm, I think that a threshold of 1000 is far too high to do much good.\n\nAgreed.\n\n\n> By the time we've tried to vacuum a table 1000 times and failed every\n> time, I anticipate that the situation will be pretty dire, regardless\n> of why we thought the table needed to be vacuumed in the first place.\n\nAgreed.\n\n\n> In the best case, with autovacuum_naptime=1minute, failing 1000 times\n> means that we've delayed vacuuming the table for at least 16 hours.\n\nPerhaps it'd make sense for an auto-cancelled worker to signal the launcher to\ndo a cycle of vacuuming? Or even to just try to vacuum the table again\nimmediately? After all, we know that the table is going to be on the schedule\nof the next worker immediately. Of course we shouldn't retry indefinitely, but\n...\n\n\n> In fact I think there's a decent argument that a threshold of ten is\n> possibly too high here, too. If you wait until the tenth try before\n> you try not auto-cancelling, then a table with a workload that makes\n> auto-cancelling 100% probable will get vacuumed 10% as often as it\n> would otherwise. I think there are cases where that would be OK, but\n> probably on the whole it's not going to go very well.\n\nThat's already kind of the case - we'll only block auto-cancelling when\nexceeding autovacuum_freeze_max_age, all the other autovacuums will be\ncancelable.\n\n\n> The only problem I see with lowering the threshold below ~10 is that the\n> signal starts to get weak. If something fails for the same reason ten times\n> in a row you can be pretty sure it's a chronic problem. If you made the\n> threshold say three you'd probably start making bad decisions sometimes --\n> you'd think that you had a chronic problem when really you just got a bit\n> unlucky.\n\nYea. 
Schema migrations in prod databases typically have to run in\nsingle-statement or very small transactions, for obvious reasons. Needing to\nlock the same table exclusively a few times during a schema migration is\npretty normal, particularly when foreign keys are involved. Getting blocked by\nautovacuum in the middle of a schema migration is NASTY.\n\nThis is why I'm a bit worried that 10 might be too low... It's not absurd for\na schema migration to create 10 new tables referencing an existing table in\nneed of vacuuming.\n\nPerhaps we should track when the first failure was, and take that into\naccount? Clearly having all 10 autovacuums on the same table cancelled is\ndifferent when those 10 cancellations happened in the last 10 *\nautovacuum_naptime minutes, than when the last successful autovacuum was hours\nago.\n\n\n> To get back to the earlier question above, I think that if the\n> retries-before-not-auto-cancelling threshold were low enough to be\n> effective, you wouldn't necessarily need to consider XID age as a\n> second reason for not auto-cancelling. You would want to force the\n> behavior anyway when you hit emergency mode, because that should force\n> all the mitigations we have, but I don't know that you need to do\n> anything before that.\n\nHm, without further restrictions, that has me worried. It's not crazy to have\na LOCK TABLE on a small-ish table be part of your workload - I've certainly\nseen it plenty of times. Suddenly blocking on that for a few minutes, just\nbecause a bit of bloat has collected, seems likely to cause havoc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 10:33:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
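[Editorial note: Andres's refinement above — that ten cancellations within one short schema migration should not count the same as ten cancellations spread over hours — amounts to combining the failure count with the time since the last successful vacuum. A minimal sketch of that combined test, with hypothetical names and thresholds chosen only for illustration:]

```python
# Sketch: burst-vs-chronic test for auto-cancel failures.
# Hypothetical helper and thresholds; not PostgreSQL code.
def force_no_auto_cancel(cancel_count, minutes_since_last_success,
                         count_threshold=10, min_window_minutes=60):
    """Disable auto-cancel only when failures are both numerous and
    spread over a real window of time, not one migration burst."""
    return (cancel_count >= count_threshold
            and minutes_since_last_success >= min_window_minutes)

# Ten cancels within a 15-minute schema migration: keep yielding.
assert not force_no_auto_cancel(10, 15)
# Ten cancels and no successful vacuum for three hours: force the issue.
assert force_no_auto_cancel(10, 180)
```

Requiring both conditions keeps the count threshold low enough to act on chronic problems without punishing a foreign-key-heavy migration that briefly locks the same table many times.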
{
"msg_contents": "On Tue, Jan 17, 2023 at 1:02 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-16 13:58:21 -0800, Peter Geoghegan wrote:\n> > On Fri, Jan 13, 2023 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > When I express skepticism of very high autovacuum_freeze_max_age\n> > settings, it's mostly just that I don't think that age(relfrozenxid)\n> > is at all informative in the way that something that triggers\n> > autovacuum ought to be. Worst of all, the older relfrozenxid gets, the\n> > less informative it becomes. I'm sure that it can be safe to use very\n> > high autovacuum_freeze_max_age values, but it seems like a case of\n> > using a very complicated and unreliable thing to decide when to\n> > VACUUM.\n>\n> Both you and Robert said this, and I have seen it be true, but typically not\n> for large high-throughput OLTP databases, where I found increasing\n> relfrozenxid to be important. Sure, there's probably some up/down through the\n> day / week, but it's likely to be pretty predictable.\n>\n> I think the problem is that an old relfrozenxid doesn't tell you how much\n> outstanding work there is. Perhaps that's what both of you meant...\n\nI think so, at least in my case. The reason why the default\nautovacuum_freeze_max_age is 300m is not because going over 300m is a\nproblem, but because we don't know how long it's going to take to run\nall of the vacuums, and we're still going to be consuming XIDs in the\nmeantime, and we have no idea how fast we're consuming XIDs either.\nFor all we know it might take days, and we might be burning through\nXIDs really quickly, so we might need a ton of headroom.\n\nThat's not to say that raising the setting might not be a sensible\nthing to do in a context where you know what the workload is. If you\nknow that the vacuums will completely quickly OR that the XID burn\nrate is low, you can raise it drastically and be fine. 
We just can't\nassume that will be true everywhere.\n\n> I think that's not the fault of relfrozenxid as a trigger, but that we simply\n> don't keep enough other stats. We should imo at least keep track of:\n>\n> In pg_class:\n>\n> - The number of all frozen pages, like we do for all-visible\n>\n> That'd give us a decent upper bound for the amount of work we need to do to\n> increase relfrozenxid. It's important that this is crash safe (thus no\n> pg_stats), and it only needs to be written when we'd likely make other\n> changes to the pg_class row anyway.\n\nI'm not sure how useful this is because a lot of the work is from\nscanning the indexes.\n\n> In pgstats:\n>\n> - The number of dead items, incremented both by the heap scan and\n> opportunistic pruning\n>\n> This would let us estimate how urgently we need to clean up indexes.\n\nI don't think this is true because btree indexes are self-cleaning in\nsome scenarios and not in others.\n\n> - The xid/mxid horizons during the last vacuum\n>\n> - The number of pages with tuples that couldn't be removed due to the horizon\n> during the last vacuum\n>\n> - The number of pages with tuples that couldn't be frozen\n\nNot bad to know, but if the horizon we could use advances by 1, we\ncan't tell whether that allows pruning nothing additional or another\nbillion tuples.\n\nI'm not trying to take the position that XID age is a totally useless\nmetric. I don't think it is. If XID age is high, you know you have a\nproblem, and the higher it is, the more urgent that problem is.\nFurthermore, if XID age is low, you know that you don't have that\nparticular problem. You might have some other one, but that's OK: this\none metric doesn't have to answer every question to be useful.\nHowever, where XID age really falls down as a metric is that it\ndoesn't tell you what it's going to take to solve the problem. The\nanswer, at ten thousand feet, is always vacuum. But how long will that\nvacuum run? We don't know. 
Do we need the XID horizon to advance\nfirst, and if so, how far? We don't know. Do we need auto-cancellation\nto be disabled? We don't know. That's where we get into a lot of\ntrouble here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Jan 2023 14:57:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
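[Editorial note: Robert's headroom argument above — the default of 300m exists because neither the XID burn rate nor the vacuum duration is known in advance — can be made concrete with some illustrative arithmetic. The numbers below are assumptions chosen only to show the shape of the trade-off, not tuning advice; the helper is hypothetical.]

```python
# Sketch: headroom reasoning behind autovacuum_freeze_max_age.
# The hard ceiling is roughly 2^31 XIDs; the risk is the XIDs consumed
# while the triggered vacuums are still running. Illustrative only.
WRAPAROUND_LIMIT = 2**31  # ~2.1 billion

def headroom_after_vacuum(freeze_max_age, xid_burn_per_hour, vacuum_hours):
    """XIDs left before wraparound once the age-triggered vacuum,
    started at freeze_max_age, finally completes."""
    consumed_during_vacuum = xid_burn_per_hour * vacuum_hours
    return WRAPAROUND_LIMIT - freeze_max_age - consumed_during_vacuum

# Default 300m: even a 3-day vacuum at 10M XIDs/hour leaves over a
# billion XIDs of headroom.
assert headroom_after_vacuum(300_000_000, 10_000_000, 72) > 1_000_000_000

# A 1.9 billion setting with the same (unknown in advance!) workload
# burns through the remaining space before the vacuum finishes.
assert headroom_after_vacuum(1_900_000_000, 10_000_000, 72) < 0
```

This is why raising the setting can be fine when the workload is known: the decision hinges entirely on the product of burn rate and vacuum duration, which the server itself cannot predict.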
{
"msg_contents": "On Tue, Jan 17, 2023 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> Both you and Robert said this, and I have seen it be true, but typically not\n> for large high-throughput OLTP databases, where I found increasing\n> relfrozenxid to be important. Sure, there's probably some up/down through the\n> day / week, but it's likely to be pretty predictable.\n>\n> I think the problem is that an old relfrozenxid doesn't tell you how much\n> outstanding work there is. Perhaps that's what both of you meant...\n\nThat's what I meant, yes.\n\n> I think that's not the fault of relfrozenxid as a trigger, but that we simply\n> don't keep enough other stats. We should imo at least keep track of:\n\nIf you assume that there is chronic undercounting of dead tuples\n(which I think is very common), then of course anything that triggers\nvacuuming is going to help with that problem -- it might be totally\ninadequate, but still make the critical difference by not allowing the\nsystem to become completely destabilized. I absolutely accept that\nusers that are relying on that exist, and that those users ought to\nnot have things get even worse -- I'm pragmatic. But overall, what we\nshould be doing is fixing the real problem, which is that the dead\ntuples accounting is deeply flawed. Actually, it's not just that the\nstatistics are flat out wrong; the whole model is flat-out wrong.\n\nThe assumptions that work well for optimizer statistics quite simply\ndo not apply here. Random sampling for this is just wrong, because\nwe're not dealing with something that follows a distribution that can\nbe characterized with a sufficiently large sample. With optimizer\nstatistics, the entire contents of the table is itself a sample taken\nfrom the wider world -- so even very stale statistics can work quite\nwell (assuming that the schema is well normalized). Whereas the\nautovacuum dead tuples stuff is characterized by constant change. 
I\nmean of course it is -- that's the whole point! The central limit\ntheorem obviously just doesn't work for something like this -- we\ncannot generalize from a sample, at all.\n\nI strongly suspect that it still wouldn't be a good model even if the\ninformation was magically always correct. It might actually be worse\nin some ways! Most of my arguments against the model are not arguments\nagainst the accuracy of the statistics as such. They're actually\narguments against the fundamental relevance of the information itself,\nto the actual problem at hand. We are not interested in information\nfor its own sake; we're interested in making better decisions about\nautovacuum scheduling. Those may only have a very loose relationship.\n\nHow many dead heap-only tuples are equivalent to one LP_DEAD item?\nWhat about page-level concentrations, and the implication for\nline-pointer bloat? I don't have a good answer to any of these\nquestions myself. And I have my doubts that there are *any* good\nanswers. Even these questions are the wrong questions (they're just\nless wrong). Fundamentally, we're deciding when the next autovacuum\nshould run against each table. Presumably it's going to have to happen\nsome time, and when it does happen it happens to the table as a whole.\nAnd with a larger table it probably doesn't matter if it happens +/- a\nfew hours from some theoretical optimal time. Doesn't it boil down to\nthat?\n\nIf we taught the system to do the autovacuum work early because it's a\nrelatively good time for it from a system level point of view (e.g.\nit's going to be less disruptive right now), that would be useful and\neasy to justify on its own terms. But it would also tend to make the\nsystem much less vulnerable to undercounting dead tuples, since in\npractice there'd be a decent chance of getting to them early enough\nthat it at least wasn't extremely painful any one time. 
It's much\neasier to understand that the system is quiescent than it is to\nunderstand bloat.\n\nBTW, I think that the insert-driven autovacuum stuff added to 13 has\nmade the situation with bloat significantly better. Of course it\nwasn't really designed to do that at all, but it still has, kind of by\naccident, in roughly the same way that antiwraparound autovacuums help\nwith bloat by accident. So maybe we should embrace \"happy accidents\"\nlike that a bit more. It doesn't necessarily matter if we do the right\nthing for a reason that turns out to have not been the best reason.\nI'm certainly not opposed to it, despite my complaints about relying\non age(relfrozenxid).\n\n> In pgstats:\n> (Various stats)\n\nOverall, what I like about your ideas here is the emphasis on bounding\nthe worst case, and the emphasis on the picture at the page level over\nthe tuple level.\n\nI'd like to use the visibility map more for stuff here, too. It is\ntotally accurate about all-visible/all-frozen pages, so many of my\ncomplaints about statistics don't really apply. Or need not apply, at\nleast. If 95% of a table's pages are all-frozen in the VM, then of\ncourse it's pretty unlikely to be the right time to VACUUM the table\nif it's to clean up bloat -- this is just about the most reliable\ninformation we have access to.\n\nI think that the only way that more stats can help is by allowing us\nto avoid doing completely the wrong thing more often. Just avoiding\ndisaster is a great goal for us here.\n\n> > This sounds like a great argument in favor of suspend-and-resume as a\n> > way of handling autocancellation -- no useful work needs to be thrown\n> > away for AV to yield for a minute or two.\n\n> Hm, that seems a lot of work. Without having held a lock you don't even know\n> whether your old dead items still apply. 
Of course it'd improve the situation\n> substantially, if we could get it.\n\nI don't think it's all that much work, once the visibility map\nsnapshot infrastructure is there.\n\nWhy wouldn't your old dead items still apply? The TIDs must always\nreference LP_DEAD stubs. Those can only be set LP_UNUSED by VACUUM,\nand presumably VACUUM can only run in a way that either resumes the\nsuspended VACUUM session, or discards it altogether. So they're not\ngoing to be invalidated during the period that a VACUUM is suspended,\nin any way. Even if CREATE INDEX runs against the table during a\nsuspended VACUUM, we know that the existing LP_DEAD dead_items won't\nhave been indexed, so they'll be safe to mark LP_UNUSED in any case.\n\nWhat am I leaving out? I can't think of anything. The only minor\ncaveat is that we'd probably have to discard the progress from any\nindividual ambulkdelete() call that happened to be running at the time\nthat VACUUM was interrupted.\n\n> > Yeah, that's pretty bad. Maybe DROP TABLE and TRUNCATE should be\n> > special cases? Maybe they should always be able to auto cancel an\n> > autovacuum?\n>\n> Yea, I think so. It's not obvious how to best pass down that knowledge into\n> ProcSleep(). It'd have to be in the LOCALLOCK, I think. Looks like the best\n> way would be to change LockAcquireExtended() to get a flags argument instead\n> of reportMemoryError, and then we could add LOCK_ACQUIRE_INTENT_DROP &\n> LOCK_ACQUIRE_INTENT_TRUNCATE or such. Then the same for\n> RangeVarGetRelidExtended(). It already \"customizes\" how to lock based on RVR*\n> flags.\n\nIt would be tricky, but still relatively straightforward compared to\nother things. It is often a TRUNCATE or a DROP TABLE, and we have\nnothing to lose and everything to gain by changing the rules for\nthose.\n\n> ISTM that some of what you write below would be addressed, at least partially,\n> by the stats I proposed above. 
Particularly keeping some \"page granularity\"\n> instead of \"tuple granularity\" stats seems helpful.\n\nThat definitely could be true, but I think that my main concern is\nthat we completely rely on randomly sampled statistics (except with\nantiwraparound autovacuums, which happen on a schedule that has\nproblems of its own).\n\n> > It's quite possible to get approximately the desired outcome with an\n> > algorithm that is completely wrong -- the way that we sometimes need\n> > autovacuum_freeze_max_age to deal with bloat is a great example of\n> > that.\n>\n> Yea. I think this is part of why I like my idea about tracking more observations\n> made by the last vacuum - they're quite easy to get right, and they\n> self-correct, rather than potentially ending up causing ever-wronger stats.\n\nI definitely think that there is a place for that. It has the huge\nadvantage of lessening our reliance on random sampling.\n\n> Right. I think it's fundamental that we get a lot better estimates about the\n> amount of work needed. Without that we have no chance of finishing autovacuums\n> before problems become too big.\n\nI like the emphasis on bounding the work required, so that it can be\nspread out, rather than trying to predict dead tuples. Again, we\nshould focus on avoiding disaster.\n\n> And serialized vacuum state won't help, because that still requires\n> vacuum to scan all the !all-visible pages to discover them. Most of which\n> won't contain dead tuples in a lot of workloads.\n\nThe main advantage of that model is that it decides what to do and\nwhen to do it based on the actual state of the table (or the state in\nthe recent past). If we see a concentration of LP_DEAD items, then we\ncan hurry up index vacuuming. If not, maybe we'll take our time.\nAgain, less reliance on random sampling is a very good thing. 
More\ndynamic decisions that are made at runtime, and delayed for as long as\npossible, just seem much more promising than having better stats that\nare randomly sampled.\n\n> I don't feel like I have a good handle on what could work for 16 and what\n> couldn't. Personally I think something like autovacuum_no_auto_cancel_age\n> would be an improvement, but I also don't quite feel satisfied with it.\n\nI don't either; but it should be strictly less unsatisfactory.\n\n> Tracking the number of autovac failures seems uncontroversial and quite\n> beneficial, even if the rest doesn't make it in. It'd at least let users\n> monitor for tables where autovac is likely to swoop in in anti-wraparound\n> mode.\n\nI'll see what I can come up with.\n\n> Perhaps it's worth separately tracking the number of times a backend would have\n> liked to cancel autovac, but couldn't due to anti-wrap? If changes to the\n> no-auto-cancel behaviour don't make it in, it'd at least allow us to collect\n> more data about the prevalence of the problem and in what situations it\n> occurs? Even just adding some logging for that case seems like it'd be an\n> improvement.\n\nHmm, maybe.\n\n> I think with a bit of polish \"Add autovacuum trigger instrumentation.\" ought\n> to be quickly mergeable.\n\nYeah, I'll try to get that part out of the way quickly.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Jan 2023 12:08:01 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 14:57:27 -0500, Robert Haas wrote:\n> > In pg_class:\n> >\n> > - The number of all frozen pages, like we do for all-visible\n> >\n> > That'd give us a decent upper bound for the amount of work we need to do to\n> > increase relfrozenxid. It's important that this is crash safe (thus no\n> > pg_stats), and it only needs to be written when we'd likely make other\n> > changes to the pg_class row anyway.\n> \n> I'm not sure how useful this is because a lot of the work is from\n> scanning the indexes.\n\nWe can increase relfrozenxid before the index scans, unless we ran out of dead\ntuple space. We already have code for that in failsafe mode, in some way. But\nwe really also should increase\n\n\n> > In pgstats:\n> >\n> > - The number of dead items, incremented both by the heap scan and\n> > opportunistic pruning\n> >\n> > This would let us estimate how urgently we need to clean up indexes.\n> \n> I don't think this is true because btree indexes are self-cleaning in\n> some scenarios and not in others.\n\nI mainly meant it from the angle of whether need to clean up dead items in the\nheap to avoid the table from bloating because we stop using those pages -\nwhich requires index scans. But even for the index scan portion, it'd give us\na better bound than we have today.\n\nWe probably should track the number of killed tuples in indexes.\n\n\n> > - The xid/mxid horizons during the last vacuum\n> >\n> > - The number of pages with tuples that couldn't removed due to the horizon\n> > during the last vacuum\n> >\n> > - The number of pages with tuples that couldn't be frozen\n> \n> Not bad to know, but if the horizon we could use advances by 1, we\n> can't tell whether that allows pruning nothing additional or another\n> billion tuples.\n\nSure. But it'd be a lot better than scanning it again and again when nothing\nhas changed because thing still holds back the horizon. 
We could improve upon\nit later by tracking the average or even bins of ages.\n\n\n> However, where XID age really falls down as a metric is that it\n> doesn't tell you what it's going to take to solve the problem. The\n> answer, at ten thousand feet, is always vacuum. But how long will that\n> vacuum run? We don't know. Do we need the XID horizon to advance\n> first, and if so, how far? We don't know. Do we need auto-cancellation\n> to be disabled? We don't know. That's where we get into a lot of\n> trouble here.\n\nAgreed. I think the metrics I proposed would help some, by at least providing\nsensible upper boundaries (for work) and minimal requirements (horizon during\nlast vacuum).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 12:14:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 3:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you assume that there is chronic undercounting of dead tuples\n> (which I think is very common), ...\n\nWhy do you think that?\n\n> How many dead heap-only tuples are equivalent to one LP_DEAD item?\n> What about page-level concentrations, and the implication for\n> line-pointer bloat? I don't have a good answer to any of these\n> questions myself.\n\nSeems a bit pessimistic. If we had unlimited resources and all\noperations were infinitely fast, the optimal strategy would be to\nvacuum after every insert, update, or delete. But in reality, that\nwould be prohibitively expensive, so we're making a trade-off.\nBatching together cleanup for many table modifications reduces the\namortized cost of cleaning up after one such operation very\nconsiderably. That's critical. But if we batch too much together, then\nthe actual cleanup doesn't happen soon enough to keep us out of\ntrouble.\n\nIf we had an oracle that could provide us with perfect information,\nwe'd ask it, among other things, how much work will be required to\nvacuum right now, and how much benefit would we get out of doing so.\nThe dead tuple count is related to the first question. It's not a\ndirect, linear relationship, but it's not completely unrelated,\neither. Maybe we could refine the estimates by gathering more or\ndifferent statistics than we do now, but ultimately it's always going\nto be a trade-off between getting the work done sooner (and thus maybe\npreventing table growth or a wraparound shutdown) and being able to do\nmore work at once (and thus being more efficient). The current system\nset of counters predates HOT and the visibility map, so it's not\nsurprising if needs updating, but if you're argue that the whole\nconcept is just garbage, I think that's an overreaction.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Jan 2023 17:11:07 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 2:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jan 17, 2023 at 3:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > If you assume that there is chronic undercounting of dead tuples\n> > (which I think is very common), ...\n>\n> Why do you think that?\n\nFor the reasons I gave about statistics, random sampling, the central\nlimit theorem. All that stuff. This matches the experience of Andres.\nAnd is obviously the only explanation behind the reliance on\nantiwraparound autovacuums for cleaning up bloat in larger OLTP\ndatabases. It just fits: the dead tuples approach can sometimes be so\ncompletely wrong that even an alternative triggering condition based\non something that is virtually unrelated to the thing we actually care\nabout can do much better in practice. Consistently, reliably, for a\ngiven table/workload.\n\n> > How many dead heap-only tuples are equivalent to one LP_DEAD item?\n> > What about page-level concentrations, and the implication for\n> > line-pointer bloat? I don't have a good answer to any of these\n> > questions myself.\n>\n> Seems a bit pessimistic. If we had unlimited resources and all\n> operations were infinitely fast, the optimal strategy would be to\n> vacuum after every insert, update, or delete. But in reality, that\n> would be prohibitively expensive, so we're making a trade-off.\n\nTo a large degree, that's my point. I don't know how to apply this\ninformation, so having detailed information doesn't seem like the main\nproblem.\n\n> If we had an oracle that could provide us with perfect information,\n> we'd ask it, among other things, how much work will be required to\n> vacuum right now, and how much benefit would we get out of doing so.\n\nAnd then what would we do? What about costs?\n\nEven if we were omniscient, we still wouldn't be omnipotent. We're\nstill subject to the laws of physics. 
VACUUM would still be something\nthat more or less works at the level of the whole table, or not at\nall. So being omniscient seems kinda overrated to me. Adding more\ninformation does not in general lead to better outcomes.\n\n> The dead tuple count is related to the first question. It's not a\n> direct, linear relationship, but it's not completely unrelated,\n> either. Maybe we could refine the estimates by gathering more or\n> different statistics than we do now, but ultimately it's always going\n> to be a trade-off between getting the work done sooner (and thus maybe\n> preventing table growth or a wraparound shutdown) and being able to do\n> more work at once (and thus being more efficient). The current system\n> set of counters predates HOT and the visibility map, so it's not\n> surprising if needs updating, but if you're argue that the whole\n> concept is just garbage, I think that's an overreaction.\n\nWhat I'm arguing is that principally relying on any one thing is\ngarbage. If you have only one thing that creates pressure to VACUUM\nthen there can be a big impact whenever it turns out to be completely\nwrong. Whereas if VACUUM can run because of (say) 3 moderate signals\ntaken together, then it's much less likely that we'll be completely\nwrong. In general my emphasis is on avoiding disaster in all its\nforms. Vacuuming somewhat early more often is perhaps suboptimal, but\nfar from a disaster. It's the kind of thing that we can manage.\n\nBy all means, let's make the dead tuples/dead items stuff less naive\n(e.g. make it distinguish between LP_DEAD items and dead heap-only\ntuples). But even then, we shouldn't continue to completely rely on it\nin the way that we do right now. In other words, I'm fine with adding\nmore information that is more accurate as long as we don't continue to\nmake the mistake of not treating it kinda suspect, and certainly not\nsomething to completely rely on if at all possible. 
In particular, we\nneed to think about both costs and benefits at all times.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Jan 2023 14:56:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 5:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Why do you think that?\n>\n> For the reasons I gave about statistics, random sampling, the central\n> limit theorem. All that stuff. This matches the experience of Andres.\n> And is obviously the only explanation behind the reliance on\n> antiwraparound autovacuums for cleaning up bloat in larger OLTP\n> databases. It just fits: the dead tuples approach can sometimes be so\n> completely wrong that even an alternative triggering condition based\n> on something that is virtually unrelated to the thing we actually care\n> about can do much better in practice. Consistently, reliably, for a\n> given table/workload.\n\nHmm, I don't know. I have no intuition one way or the other for\nwhether we're undercounting dead tuples, and I don't understand what\nwould cause us to do that. I thought that we tracked that accurately,\nas part of the statistics system, not by sampling\n(pg_stat_all_tables.n_dead_tup).\n\nBut, I think there are a number of other explanations for why we tend\nto rely on antiwraparound vacuums more than we should.\nAuto-cancellation. Skipping tables that are locked, or pages that are\npinned. A cost limit that is too low relative to the size of the\ndatabase, so that eventually all tables are in wraparound danger all\nthe time. The fact that we can vacuum tables uselessly, without\naccomplishing anything, because the XID horizon is too new, but we\ndon't know that so we just try to vacuum anyway. And then we repeat\nthat useless work in an infinite loop. The fact that the system's idea\nof when a vacuum needs to happen grows with\nautovacuum_vacuum_scale_factor, but that actually gets too big too\nfast, so that eventually it never triggers vacuuming at all, or at\nleast not before XID age does.\n\nI think we ought to fire autovacuum_vacuum_scale_factor out of an\nairlock. 
It's not the right model, and I think many people have been\naware that it's not the right model for a decade, and we haven't been\ncreative enough to come up with anything better. We *know* that you\nhave to lower this value for large tables or they just don't get\nvacuumed often enough. That means we have some idea how often they\nought to be vacuumed. I'm sure I'm not the person who has the best\nintuition on that point, but I bet people who have been responsible\nfor large production systems have some decent ideas in that area. We\nshould find out what that intuition is and come up with a new formula\nthat matches the intuition of people with experience doing this sort\nof thing.\n\ne.g.\n\n1. When computing autovacuum_vacuum_threshold + table_size *\nautovacuum_vacuum_scale_factor, if the result exceeds the value of a\nnew parameter autovacuum_vacuum_maximum_threshold, then clamp the\nresult to that value.\n\n2. When computing autovacuum_vacuum_threshold + table_size *\nautovacuum_vacuum_scale_factor, if the result exceeds 80% of the\nnumber of dead TIDs we could store, clamp it to that number.\n\n3. Change the formula altogether to use a square root or a cube root\nor a logarithm somewhere.\n\nI think we also ought to invent some sort of better cost limit system\nthat doesn't shoot you in the foot automatically as the database\ngrows. Nobody actually wants to limit the rate at which the database\nvacuums stuff to a constant. What they really want to do is limit it\nto a rate that is somewhat faster than the minimum rate needed to\navoid disaster. We should try to develop metrics for whether vacuum is\nkeeping up. I think one good one would be duty cycle -- if we have N\nvacuum workers, then over a period of K seconds we could have done as\nmuch as N*K process-seconds of vacuum work, and as little as 0. So\nfigure out how many seconds of vacuum work we actually did, and divide\nthat by N*K to get a percentage. 
If it's over, say, 90%, then we are\nnot keeping up. We should dynamically raise the cost limit until we\ndo. And drop it back down later when things look better.\n\nI don't actually see any reason why dead tuples, even counted in a\nrelatively stupid way, isn't fundamentally good enough to get all\ntables vacuumed before we hit the XID age cutoff. It doesn't actually\ndo that right now, but I feel like that must be because we're doing\nother stupid things, not because there's anything that terrible about\nthe metric as such. Maybe that's wrong, but I find it hard to imagine.\nIf I imagine a world where vacuum always gets started when the number\nof dead tuples hits some reasonable bound (rather than the\nunreasonable bound that the scale factor stuff computes) and it always\ncleans up those dead tuples (instead of doing a lot of work to clean\nup nothing at all, or doing a lot of work to clean up only a small\nfraction of those dead tuples, or cancelling itself, or skipping the\ntable that has the problem because it's locked, or running with an\nunreasonably low cost limit, or otherwise being unable to GET THE JOB\nDONE) then how do we ever reach autovacuum_freeze_max_age? I think it\nwould still be possible, but only if the XID consumption rate of the\nserver is so high that we chunk through 300 million XIDs in the time\nit takes to perform an un-throttled vacuum of the table. I think\nthat's a real threat and will probably be a bigger one in ten years,\nbut it's only one of many things that are going wrong right now.\n\n> Even if we were omniscient, we still wouldn't be omnipotent.\n\nA sound theological point!\n\n> We're\n> still subject to the laws of physics. VACUUM would still be something\n> that more or less works at the level of the whole table, or not at\n> all. So being omniscient seems kinda overrated to me. Adding more\n> information does not in general lead to better outcomes.\n\nYeah, I think that's true. 
In particular, it's not much use being\nomniscient but stupid. It would be better to have limited information\nand be smart about what you did with it.\n\n> What I'm arguing is that principally relying on any one thing is\n> garbage. If you have only one thing that creates pressure to VACUUM\n> then there can be a big impact whenever it turns out to be completely\n> wrong. Whereas if VACUUM can run because of (say) 3 moderate signals\n> taken together, then it's much less likely that we'll be completely\n> wrong. In general my emphasis is on avoiding disaster in all its\n> forms. Vacuuming somewhat early more often is perhaps suboptimal, but\n> far from a disaster. It's the kind of thing that we can manage.\n\nTrue, although it can be overdone. An extra vacuum on a big table with\nsome large indexes that end up getting scanned can be very expensive\neven if the table itself is almost entirely all-visible. We can't\nafford to make too many mistakes in the direction of vacuuming early\nin such cases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Jan 2023 10:54:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": ".\n\nOn Wed, Jan 18, 2023 at 7:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > It just fits: the dead tuples approach can sometimes be so\n> > completely wrong that even an alternative triggering condition based\n> > on something that is virtually unrelated to the thing we actually care\n> > about can do much better in practice. Consistently, reliably, for a\n> > given table/workload.\n>\n> Hmm, I don't know. I have no intuition one way or the other for\n> whether we're undercounting dead tuples, and I don't understand what\n> would cause us to do that. I thought that we tracked that accurately,\n> as part of the statistics system, not by sampling\n> (pg_stat_all_tables.n_dead_tup).\n\nIt's both, kind of.\n\npgstat_report_analyze() will totally override the\ntabentry->dead_tuples information that drives autovacuum.c, based on\nan estimate derived from a random sample -- which seems to me to be an\napproach that just doesn't have any sound theoretical basis. So while\nthere is a sense in which we track dead tuples incrementally and\naccurately using the statistics system, we occasionally call\npgstat_report_analyze (and pgstat_report_vacuum) like this, so AFAICT\nwe might as well not even bother tracking things reliably the rest of\nthe time.\n\nRandom sampling works because the things that you don't sample are\nvery well represented by the things that you do sample. That's why\neven very stale optimizer statistics can work quite well (and why the\nEAV anti-pattern makes query optimization impossible) -- the\ndistribution is often fixed, more or less. The statistics generalize\nvery well because the data meets certain underlying assumptions that\nall data stored in a relational database is theoretically supposed to\nmeet. Whereas with dead tuples, the whole point is to observe and\ncount dead tuples so that autovacuum can then go remove the dead\ntuples -- which then utterly changes the situation! 
That's a huge\ndifference.\n\nISTM that you need a *totally* different approach for something that's\nfundamentally dynamic, which is what this really is. Think about how\nthe random sampling will work in a very large table with concentrated\nupdates. The modified pages need to outweigh the large majority of\npages in the table that can be skipped by VACUUM anyway.\n\nI wonder how workable it would be to just teach pgstat_report_analyze\nand pgstat_report_vacuum to keep out of this, or to not update the\nstats unless it's to increase the number of dead_tuples...\n\n> I think we ought to fire autovacuum_vacuum_scale_factor out of an\n> airlock.\n\nCouldn't agree more. I think that this and the underlying statistics\nare the really big problem as far as under-vacuuming is concerned.\n\n> I think we also ought to invent some sort of better cost limit system\n> that doesn't shoot you in the foot automatically as the database\n> grows. Nobody actually wants to limit the rate at which the database\n> vacuums stuff to a constant. What they really want to do is limit it\n> to a rate that is somewhat faster than the minimum rate needed to\n> avoid disaster. We should try to develop metrics for whether vacuum is\n> keeping up.\n\nDefinitely agree that doing some kind of dynamic updating is\npromising. What we thought at the start versus what actually happened.\nSomething cyclic, just like autovacuum itself.\n\n> I don't actually see any reason why dead tuples, even counted in a\n> relatively stupid way, isn't fundamentally good enough to get all\n> tables vacuumed before we hit the XID age cutoff. It doesn't actually\n> do that right now, but I feel like that must be because we're doing\n> other stupid things, not because there's anything that terrible about\n> the metric as such. Maybe that's wrong, but I find it hard to imagine.\n\nOn reflection, maybe you're right here. 
Maybe it's true that the\nbigger problem is just that the implementation is bad, even on its own\nterms -- since it's pretty bad! Hard to say at this point.\n\nDepends on how you define it, too. Statistical sampling is just not\nfit for purpose here. But is that a problem with\nautovacuum_vacuum_scale_factor? I may have said words that could\nreasonably be interpreted that way, but I'm not prepared to blame it\non the underlying autovacuum_vacuum_scale_factor model now. It's\nfuzzy.\n\n> > We're\n> > still subject to the laws of physics. VACUUM would still be something\n> > that more or less works at the level of the whole table, or not at\n> > all. So being omniscient seems kinda overrated to me. Adding more\n> > information does not in general lead to better outcomes.\n>\n> Yeah, I think that's true. In particular, it's not much use being\n> omniscient but stupid. It would be better to have limited information\n> and be smart about what you did with it.\n\nI would put it like this: autovacuum shouldn't ever be a sucker. It\nshould pay attention to disconfirmatory signals. The information that\ndrives its decision making process should be treated as provisional.\n\nEven if the information was correct at one point, the contents of the\ntable are constantly changing in a way that could matter enormously.\nSo we should be paying attention to where the table is going -- and\neven where it might be going -- not just where it is, or was.\n\n> True, although it can be overdone. An extra vacuum on a big table with\n> some large indexes that end up getting scanned can be very expensive\n> even if the table itself is almost entirely all-visible. We can't\n> afford to make too many mistakes in the direction of vacuuming early\n> in such cases.\n\nNo, but we can afford to make some -- and can detect when it happened\nafter the fact. 
I would rather err on the side of over-vacuuming,\nespecially if the system is smart enough to self-correct when that\nturns out to be the wrong approach. One of the advantages of running\nVACUUM sooner is that it provides us with relatively reliable\ninformation about the needs of the table.\n\nWe can also cheat, sort of. If we find another justification for\nautovacuuming (e.g., it's a quiet time for the system as a whole), and\nit works out to help with this other problem, it may be just as good\nfor users.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 10:30:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> pgstat_report_analyze() will totally override the\n> tabentry->dead_tuples information that drives autovacuum.c, based on\n> an estimate derived from a random sample -- which seems to me to be an\n> approach that just doesn't have any sound theoretical basis.\n\nYikes. I think we don't have a choice but to have a method to correct\nthe information somehow, because AFAIK the statistics system is not\ncrash-safe. But that approach does seem to carry significant risk of\noverwriting correct information with wrong information.\n\n> On reflection, maybe you're right here. Maybe it's true that the\n> bigger problem is just that the implementation is bad, even on its own\n> terms -- since it's pretty bad! Hard to say at this point.\n>\n> Depends on how you define it, too. Statistically sampling is just not\n> fit for purpose here. But is that a problem with\n> autovacuum_vacuum_scale_factor? I may have said words that could\n> reasonably be interpreted that way, but I'm not prepared to blame it\n> on the underlying autovacuum_vacuum_scale_factor model now. It's\n> fuzzy.\n\nYep. I think what we should try to evaluate is which number is\nfurthest from the truth. My guess is that the threshold is so high\nrelative to what a reasonable value would be that you can't get any\nbenefit out of making the dead tuple count more accurate. Like, if the\nthreshold is 100x too high, or something, then who cares how accurate\nthe dead tuples number is? It's going to be insufficient to trigger\nvacuuming whether it's right or wrong. We should try substituting a\nless-bogus threshold calculation and see what happens then. An\nalternative theory is that the threshold is fine and we're only\nfailing to reach it because the dead tuple calculation is so\ninaccurate. Maybe that's even true in some scenarios, but I bet that\nit's never the issue when people have really big tables. 
The fact that\nI'm OK with 10MB of bloat in my 100MB table doesn't mean I'm OK with\n1TB of bloat in my 10TB table. Among other problems, I can't even\nvacuum away that much bloat in one index pass, because autovacuum\ncan't use enough work memory for that. Also, the absolute space\nwastage matters.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Jan 2023 14:02:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 11:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > pgstat_report_analyze() will totally override the\n> > tabentry->dead_tuples information that drives autovacuum.c, based on\n> > an estimate derived from a random sample -- which seems to me to be an\n> > approach that just doesn't have any sound theoretical basis.\n>\n> Yikes. I think we don't have a choice but to have a method to correct\n> the information somehow, because AFAIK the statistics system is not\n> crash-safe. But that approach does seem to carry significant risk of\n> overwriting correct information with wrong information.\n\nThis situation is really quite awful, so maybe we should do something\nabout it soon, in the scope of the Postgres 16 work on autovacuum that\nis already underway. In fact I think that the problem here is so bad\nthat even something slightly less naive would be far more effective.\n\nYou're right to point out that pgstat_report_analyze needs to update\nthe stats in case there is a hard crash, of course. But there is\nplenty of context with which to make better decisions close at hand.\nFor example, I bet that pgstat_report_analyze already does a pretty\ngood job of estimating live_tuples -- my spiel about statistics mostly\ndoesn't apply to live_tuples. Suppose that we notice that its new\nestimate for live_tuples approximately matches what the stats\nsubsystem already thought about live_tuples, while dead_tuples is far\nfar lower. We shouldn't be so credulous as to believe the new\ndead_tuples estimate at that point.\n\nPerhaps we can change nothing about dead_tuples at all when this\nhappens. Or perhaps we can set dead_tuples to a value that is scaled\nfrom the old estimate. The new dead_tuples value could be derived by\ntaking the ratio of the old live_tuples to the old dead_tuples, and\nthen using that to scale from the new live_tuples. 
This is just a\nfirst pass, to see what you and others think. Even very simple\nheuristics seem like they could make things much better.\n\nAnother angle of attack is the PD_ALL_VISIBLE page-level bit, which\nacquire_sample_rows() could pay attention to -- especially in larger\ntables, where the difference between all pages and just the\nall-visible subset of pages is most likely to matter. The more sampled\npages that had PD_ALL_VISIBLE set, the less credible the new\ndead_tuples estimate will be (relative to existing information), and\nso pgstat_report_analyze() should prefer the new estimate over the old\none in proportion to that.\n\nWe probably shouldn't change anything about pgstat_report_vacuum as\npart of this effort to make pgstat_report_analyze less terrible in the\nnear term. It certainly has its problems (what is true for pages that\nVACUUM scanned at the end of VACUUM is far from representative for new\npages!), but it's probably much less of a contributor to issues like those\nthat Andres reports seeing.\n\nBTW, one of the nice things about the insert-driven autovacuum stats\nis that pgstat_report_analyze doesn't have an opinion about how many\ntuples were inserted since the last VACUUM ran. It does have other\nproblems, but they seem less serious to me.\n\n> Yep. I think what we should try to evaluate is which number is\n> furthest from the truth. My guess is that the threshold is so high\n> relative to what a reasonable value would be that you can't get any\n> benefit out of making the dead tuple count more accurate. Like, if the\n> threshold is 100x too high, or something, then who cares how accurate\n> the dead tuples number is?\n\nRight. Or if we don't make any reasonable distinction between LP_DEAD\nitems and dead heap-only tuples, then the total number of both things\ntogether may matter very little. Better to be approximately correct\nthan exactly wrong. 
Deliberately introducing a bias to lower the\nvariance is a perfectly valid approach.\n\n> Maybe that's even true in some scenarios, but I bet that\n> it's never the issue when people have really big tables. The fact that\n> I'm OK with 10MB of bloat in my 100MB table doesn't mean I'm OK with\n> 1TB of bloat in my 10TB table. Among other problems, I can't even\n> vacuum away that much bloat in one index pass, because autovacuum\n> can't use enough work memory for that. Also, the absolute space\n> wastage matters.\n\nI certainly agree with all that.\n\nFWIW, part of my mental model with VACUUM is that the rules kind of\nchange in the case of a big table. We're far more vulnerable to issues\nsuch as (say) waiting for cleanup locks because the overall cadence\nused by autovacuum is so infrequent relative to everything else.\nThere are more opportunities for things to go wrong, worse\nconsequences when they do go wrong, and greater potential for the\nproblems to compound.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 12:15:17 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
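The PD_ALL_VISIBLE weighting sketched in the message above can be written down in a few lines. This is purely a hypothetical illustration, not the actual pgstat_report_analyze() code -- the function name, the linear blend, and the numbers are all assumptions:

```python
def blend_dead_tuples(old_estimate, analyze_estimate,
                      sampled_pages, all_visible_sampled):
    """Weight ANALYZE's new dead_tuples estimate by the fraction of
    sampled pages that were NOT all-visible: a sample dominated by
    all-visible pages says little about where the dead tuples are."""
    if sampled_pages == 0:
        return old_estimate
    credibility = 1.0 - all_visible_sampled / sampled_pages
    return credibility * analyze_estimate + (1.0 - credibility) * old_estimate

# 75% of sampled pages were all-visible, so the (possibly bogus) new
# estimate of 10000 only pulls the old value of 500000 down part way.
print(blend_dead_tuples(500_000, 10_000, 100, 75))  # 377500.0
```

Whether a linear blend is the right shape is exactly the kind of question the paragraph above leaves open; the point is only that the sampled all-visible fraction is cheap to collect during acquire_sample_rows().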
{
"msg_contents": "Hi,\n\nOn 2023-01-17 12:08:01 -0800, Peter Geoghegan wrote:\n> > I think that's not the fault of relfrozenxid as a trigger, but that we simply\n> > don't keep enough other stats. We should imo at least keep track of:\n> \n> If you assume that there is chronic undercounting of dead tuples\n> (which I think is very common), then of course anything that triggers\n> vacuuming is going to help with that problem -- it might be totally\n> inadequate, but still make the critical difference by not allowing the\n> system to become completely destabilized. I absolutely accept that\n> users that are relying on that exist, and that those users ought to\n> not have things get even worse -- I'm pragmatic. But overall, what we\n> should be doing is fixing the real problem, which is that the dead\n> tuples accounting is deeply flawed. Actually, it's not just that the\n> statistics are flat out wrong; the whole model is flat-out wrong.\n\nI think that depends on what \"whole model\" encompasses...\n\n\n> The assumptions that work well for optimizer statistics quite simply\n> do not apply here. Random sampling for this is just wrong, because\n> we're not dealing with something that follows a distribution that can\n> be characterized with a sufficiently large sample. With optimizer\n> statistics, the entire contents of the table is itself a sample taken\n> from the wider world -- so even very stale statistics can work quite\n> well (assuming that the schema is well normalized). Whereas the\n> autovacuum dead tuples stuff is characterized by constant change. I\n> mean of course it is -- that's the whole point! The central limit\n> theorem obviously just doesn't work for something like this -- we\n> cannot generalize from a sample, at all.\n\nIf we were to stop dropping stats after crashes, I think we likely could\nafford to stop messing with dead tuple stats during analyze. 
Right now it's\nvaluable to some degree because it's a way to reasonably quickly recover from\nlost stats.\n\nThe main way to collect inserted / dead tuple info for autovacuum's benefit is\nvia the stats collected when making changes.\n\nWe probably ought to simply not update dead tuples after analyze if the stats\nentry has information about a prior [auto]vacuum. Or at least split the\nfields.\n\n\n> How many dead heap-only tuples are equivalent to one LP_DEAD item?\n> What about page-level concentrations, and the implication for\n> line-pointer bloat? I don't have a good answer to any of these\n> questions myself. And I have my doubts that there are *any* good\n> answers.\n\nHence my suggestion to track several of these via page-level stats. In the big\npicture it doesn't really matter that much whether there's 10 or 100 (dead\ntuples|items) on a page that needs to be removed. It matters that the page\nneeds to be processed.\n\n\n> Even these questions are the wrong questions (they're just less wrong).\n\nI don't agree. Nothing is going to be perfect, but you're not going to be able\nto do sensible vacuum scheduling without some stats, and it's fine if those\nare an approximation, as long as the approximation makes some sense.\n\n\n> I'd like to use the visibility map more for stuff here, too. It is\n> totally accurate about all-visible/all-frozen pages, so many of my\n> complaints about statistics don't really apply. Or need not apply, at\n> least. If 95% of a table's pages are all-frozen in the VM, then of\n> course it's pretty unlikely to be the right time to VACUUM the table\n> if it's to clean up bloat -- this is just about the most reliable\n> information we have access to.\n\nI think querying that from stats is too expensive for most things. I suggested\ntracking all-frozen in pg_class. Perhaps we should also track when pages are\n*removed* from the VM in pgstats, I don't think we do today.
That should give\na decent picture?\n\n\n\n> > > This sounds like a great argument in favor of suspend-and-resume as a\n> > > way of handling autocancellation -- no useful work needs to be thrown\n> > > away for AV to yield for a minute or two.\n> \n> > Hm, that seems a lot of work. Without having held a lock you don't even know\n> > whether your old dead items still apply. Of course it'd improve the situation\n> > substantially, if we could get it.\n> \n> I don't think it's all that much work, once the visibility map\n> snapshot infrastructure is there.\n> \n> Why wouldn't your old dead items still apply?\n\nWell, for one the table could have been rewritten. Of course we can add the\ncode to deal with that, but it is definitely something to be aware of. There\nmight also be some oddities around indexes getting added / removed.\n\n\n\n> > > Yeah, that's pretty bad. Maybe DROP TABLE and TRUNCATE should be\n> > > special cases? Maybe they should always be able to auto cancel an\n> > > autovacuum?\n> >\n> > Yea, I think so. It's not obvious how to best pass down that knowledge into\n> > ProcSleep(). It'd have to be in the LOCALLOCK, I think. Looks like the best\n> > way would be to change LockAcquireExtended() to get a flags argument instead\n> > of reportMemoryError, and then we could add LOCK_ACQUIRE_INTENT_DROP &\n> > LOCK_ACQUIRE_INTENT_TRUNCATE or such. Then the same for\n> > RangeVarGetRelidExtended(). It already \"customizes\" how to lock based on RVR*\n> > flags.\n> \n> It would be tricky, but still relatively straightforward compared to\n> other things. It is often a TRUNCATE or a DROP TABLE, and we have\n> nothing to lose and everything to gain by changing the rules for\n> those.\n\nProbably should also change the rules for VACUUM and VACUUM FULL / CLUSTER, if\nwe do it. Manual VACUUM will often be faster due to the cost limits, and\nVACUUM FULL can be *considerably* faster than VACUUM once you hit bad bloat.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Jan 2023 12:19:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 3:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Suppose that we notice that its new\n> estimate for live_tuples approximately matches what the stats\n> subsystem already thought about live_tuples, while dead_tuples is far\n> far lower. We shouldn't be so credulous as to believe the new\n> dead_tuples estimate at that point.\n>\n> Another angle of attack is the PD_ALL_VISIBLE page-level bit, which\n> acquire_sample_rows() could pay attention to -- especially in larger\n> tables, where the difference between all pages and just the\n> all-visible subset of pages is most likely to matter. The more sampled\n> pages that had PD_ALL_VISIBLE set, the less credible the new\n> dead_tuples estimate will be (relative to existing information), and\n> so pgstat_report_analyze() should prefer the new estimate over the old\n> one in proportion to that.\n\nI don't know enough about the specifics of how this works to have an\nintelligent opinion about how likely these particular ideas are to\nwork out. However, I think it's risky to look at estimates and try to\ninfer whether they are reliable. It's too easy to be wrong. What we\nreally want to do is anchor our estimates to some data source that we\nknow we can trust absolutely. If you trust possibly-bad data less, it\nscrews up your estimates more slowly, but it still screws them up.\n\nIf Andres is correct that what really matters is the number of pages\nwe're going to have to dirty, we could abandon counting dead tuples\naltogether and just count not-all-visible pages in the VM map. That\nwould be cheap enough to recompute periodically. However, it would\nalso be a big behavior change from the way we do things now, so I'm\nnot sure it's a good idea. Still, a quantity that we can be certain\nwe're measuring accurately is better than one we can't measure\naccurately even if it's a somewhat worse proxy for the thing we really\ncare about.
There's a ton of value in not being completely and totally\nwrong.\n\n> FWIW, part of my mental model with VACUUM is that the rules kind of\n> change in the case of a big table. We're far more vulnerable to issues\n> such as (say) waiting for cleanup locks because the overall cadence\n> used by autovacuum is so infrequently relative to everything else.\n> There are more opportunities for things to go wrong, worse\n> consequences when they do go wrong, and greater potential for the\n> problems to compound.\n\nYes. A lot of parts of PostgreSQL, including this one, were developed\na long time ago when PostgreSQL databases were a lot smaller than they\noften are today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Jan 2023 15:43:48 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
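Robert's VM-only proxy is easy to sketch. Assuming a toy representation of the visibility map as one boolean per heap page (the real VM packs two bits per page), the bloat proxy is just a count -- exact by construction, even if it is a coarser stand-in for dead tuples:

```python
def not_all_visible_pages(vm_bits):
    """Count pages whose all-visible bit is unset -- an upper bound on
    the pages a vacuum would need to scan (and possibly dirty)."""
    return sum(1 for all_visible in vm_bits if not all_visible)

# 100 of 1000 pages are not all-visible; this number can be trusted
# absolutely, unlike a sampled dead-tuple estimate.
vm = [True] * 900 + [False] * 100
print(not_all_visible_pages(vm))  # 100
```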
{
"msg_contents": "On Wed, Jan 18, 2023 at 12:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't know enough about the specifics of how this works to have an\n> intelligent opinion about how likely these particular ideas are to\n> work out. However, I think it's risky to look at estimates and try to\n> infer whether they are reliable. It's too easy to be wrong. What we\n> really want to do is anchor our estimates to some data source that we\n> know we can trust absolutely. If you trust possibly-bad data less, it\n> screws up your estimates more slowly, but it still screws them up.\n\nSome of what I'm proposing arguably amounts to deliberately adding a\nbias. But that's not an unreasonable thing in itself. I think of it as\nrelated to the bias-variance tradeoff, which is a concept that comes\nup a lot in machine learning and statistical inference.\n\nWe can afford to be quite imprecise at times, especially if we choose\na bias that we know has much less potential to do us harm -- some\nmistakes hurt much more than others. We cannot afford to ever be\ndramatically wrong, though -- especially in the direction of vacuuming\nless often.\n\nBesides, there is something that we *can* place a relatively high\ndegree of trust in that will still be in the loop here: VACUUM itself.\nIf VACUUM runs then it'll call pgstat_report_vacuum(), which will set\nthe record straight in the event of overestimating dead tuples. To\nsome degree the problem of overestimating dead tuples is\nself-limiting.\n\n> If Andres is correct that what really matter is the number of pages\n> we're going to have to dirty, we could abandon counting dead tuples\n> altogether and just count not-all-visible pages in the VM map.\n\nThat's what matters most from a cost point of view IMV. So it's a big\npart of the overall picture, but not everything. It tells us\nrelatively little about the benefits, except perhaps when most pages\nare all-visible.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:02:15 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 12:15:17 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 11:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > pgstat_report_analyze() will totally override the\n> > > tabentry->dead_tuples information that drives autovacuum.c, based on\n> > > an estimate derived from a random sample -- which seems to me to be an\n> > > approach that just doesn't have any sound theoretical basis.\n> >\n> > Yikes. I think we don't have a choice but to have a method to correct\n> > the information somehow, because AFAIK the statistics system is not\n> > crash-safe. But that approach does seem to carry significant risk of\n> > overwriting correct information with wrong information.\n\nI suggested nearby to only have ANALYZE update dead_tuples if there's been no\n[auto]vacuum since the stats entry was created. That allows recovering from\nstats resets, be it via crashes or explicitly. What do you think?\n\nTo add insult to injury, we overwrite accurate information gathered by VACUUM\nwith bad information gathered by ANALYZE if you do VACUUM ANALYZE.\n\n\n\nOne complicating factor is that VACUUM sometimes computes an incrementally\nmore bogus n_live_tup when it skips pages due to the VM, whereas ANALYZE\ncomputes something sane.
I unintentionally encountered one when I was trying\nsomething while writing this email, reproducer attached.\n\n\nVACUUM (DISABLE_PAGE_SKIPPING) foo;\nSELECT n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE relid = 'foo'::regclass;\n┌────────────┬────────────┐\n│ n_live_tup │ n_dead_tup │\n├────────────┼────────────┤\n│ 9000001 │ 500000 │\n└────────────┴────────────┘\n\nafter one VACUUM:\n┌────────────┬────────────┐\n│ n_live_tup │ n_dead_tup │\n├────────────┼────────────┤\n│ 8549905 │ 500000 │\n└────────────┴────────────┘\n\nafter 9 more VACUUMs:\n┌────────────┬────────────┐\n│ n_live_tup │ n_dead_tup │\n├────────────┼────────────┤\n│ 5388421 │ 500000 │\n└────────────┴────────────┘\n(1 row)\n\n\nI briefly tried it out, and it does *not* reproduce in 11, but does in\n12. Haven't dug into what the cause is, but we probably use the wrong\ndenominator somewhere...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 18 Jan 2023 13:08:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 1:02 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Some of what I'm proposing arguably amounts to deliberately adding a\n> bias. But that's not an unreasonable thing in itself. I think of it as\n> related to the bias-variance tradeoff, which is a concept that comes\n> up a lot in machine learning and statistical inference.\n\nTo be clear, I was thinking of unreservedly trusting what\npgstat_report_analyze() says about dead_tuples in the event of its\nestimate increasing our dead_tuples estimate, while being skeptical\n(to a varying degree) when it's the other way around.\n\nBut now I need to go think about what Andres just brought up...\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:12:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 13:08:44 -0800, Andres Freund wrote:\n> One complicating factor is that VACUUM sometimes computes an incrementally\n> more bogus n_live_tup when it skips pages due to the VM, whereas ANALYZE\n> computes something sane. I unintentionally encountered one when I was trying\n> something while writing this email, reproducer attached.\n> \n> \n> VACUUM (DISABLE_PAGE_SKIPPING) foo;\n> SELECT n_live_tup, n_dead_tup FROM pg_stat_user_tables WHERE relid = 'foo'::regclass;\n> ┌────────────┬────────────┐\n> │ n_live_tup │ n_dead_tup │\n> ├────────────┼────────────┤\n> │ 9000001 │ 500000 │\n> └────────────┴────────────┘\n> \n> after one VACUUM:\n> ┌────────────┬────────────┐\n> │ n_live_tup │ n_dead_tup │\n> ├────────────┼────────────┤\n> │ 8549905 │ 500000 │\n> └────────────┴────────────┘\n> \n> after 9 more VACUUMs:\n> ┌────────────┬────────────┐\n> │ n_live_tup │ n_dead_tup │\n> ├────────────┼────────────┤\n> │ 5388421 │ 500000 │\n> └────────────┴────────────┘\n> (1 row)\n> \n> \n> I briefly tried it out, and it does *not* reproduce in 11, but does in\n> 12. Haven't dug into what the cause is, but we probably use the wrong\n> denominator somewhere...\n\nOh, it does actually reproduce in 11 too - my script just didn't see it\nbecause it was \"too fast\". For some reason < 12 it takes longer for the new\npgstat snapshot to be available. 
If I add a few sleeps, it shows in 11.\n\nThe real point of change appears to be 10->11.\n\nThere's a relevant looking difference in the vac_estimate_reltuples call:\n10:\n\t/* now we can compute the new value for pg_class.reltuples */\n\tvacrelstats->new_rel_tuples = vac_estimate_reltuples(onerel, false,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t nblocks,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t vacrelstats->tupcount_pages,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t num_tuples);\n\n11:\n\t/* now we can compute the new value for pg_class.reltuples */\n\tvacrelstats->new_live_tuples = vac_estimate_reltuples(onerel,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t nblocks,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t vacrelstats->tupcount_pages,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t live_tuples);\nwhich points to:\n\ncommit 7c91a0364fcf5d739a09cc87e7adb1d4a33ed112\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2018-03-22 15:47:29 -0400\n\n Sync up our various ways of estimating pg_class.reltuples.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:42:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 1:08 PM Andres Freund <andres@anarazel.de> wrote:\n> I suggested nearby to only have ANALYZE dead_tuples it if there's been no\n> [auto]vacuum since the stats entry was created. That allows recovering from\n> stats resets, be it via crashes or explicitly. What do you think?\n\nI like that idea. It's far simpler than the kind of stuff I was\nthinking about, and probably just as effective. Even if it introduces\nsome unforeseen problem (which seems unlikely), we can still rely on\npgstat_report_vacuum() to set things straight before too long.\n\nAre you planning on writing a patch for this? I'd be very interested\nin seeing this through. Could definitely review it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:45:19 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 13:42:40 -0800, Andres Freund wrote:\n> The real point of change appears to be 10->11.\n>\n> There's a relevant looking difference in the vac_estimate_reltuples call:\n> 10:\n> \t/* now we can compute the new value for pg_class.reltuples */\n> \tvacrelstats->new_rel_tuples = vac_estimate_reltuples(onerel, false,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t nblocks,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t vacrelstats->tupcount_pages,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t num_tuples);\n>\n> 11:\n> \t/* now we can compute the new value for pg_class.reltuples */\n> \tvacrelstats->new_live_tuples = vac_estimate_reltuples(onerel,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t nblocks,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t vacrelstats->tupcount_pages,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t live_tuples);\n> which points to:\n>\n> commit 7c91a0364fcf5d739a09cc87e7adb1d4a33ed112\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: 2018-03-22 15:47:29 -0400\n>\n> Sync up our various ways of estimating pg_class.reltuples.\n\nThe problem with the change is here:\n\n\t/*\n\t * Okay, we've covered the corner cases. The normal calculation is to\n\t * convert the old measurement to a density (tuples per page), then\n\t * estimate the number of tuples in the unscanned pages using that figure,\n\t * and finally add on the number of tuples in the scanned pages.\n\t */\n\told_density = old_rel_tuples / old_rel_pages;\n\tunscanned_pages = (double) total_pages - (double) scanned_pages;\n\ttotal_tuples = old_density * unscanned_pages + scanned_tuples;\n\treturn floor(total_tuples + 0.5);\n\n\nBecause we'll re-scan the pages for not-yet-removable rows in subsequent\nvacuums, the next vacuum will process the same pages again. 
By using\nscanned_tuples = live_tuples, we basically remove not-yet-removable tuples\nfrom reltuples, each time.\n\nThe commit *did* try to account for that to some degree:\n\n+ /* also compute total number of surviving heap entries */\n+ vacrelstats->new_rel_tuples =\n+ vacrelstats->new_live_tuples + vacrelstats->new_dead_tuples;\n\n\nbut new_rel_tuples isn't used for pg_class.reltuples or pgstat.\n\n\nThis is pretty nasty. We use reltuples for a lot of things. And while analyze\nmight fix it sometimes, that won't reliably be the case, particularly when\nthere are repeated autovacuums due to a longrunning transaction - there's no\ncause for auto-analyze to trigger again soon, while autovacuum will go at it\nagain and again.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Jan 2023 14:22:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
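A simplified model of the calculation quoted above makes the consequence concrete. The helper below is a sketch of vac_estimate_reltuples()'s normal path only (corner cases elided, page counts invented): feeding it live tuples alone, as the commit did, behaves very differently from adding back the not-yet-removable tuples -- the computed-but-unused new_rel_tuples.

```python
def density_estimate(old_tuples, old_pages, total_pages,
                     scanned_pages, scanned_tuples):
    # "normal calculation" path only: extrapolate the old density over
    # the unscanned pages, then add what this scan actually counted
    old_density = old_tuples / old_pages
    unscanned_pages = total_pages - scanned_pages
    return int(old_density * unscanned_pages + scanned_tuples + 0.5)

# 1000-page table, reltuples = 10M. Vacuum re-scans the 500 pages that
# hold only the 5M not-yet-removable tuples, so it counts 0 live tuples.
live_only = density_estimate(10_000_000, 1000, 1000, 500, 0)
with_dead = live_only + 5_000_000   # new_live_tuples + new_dead_tuples
print(live_only, with_dead)  # 5000000 10000000
```

With live tuples only, half the table's tuples vanish from the estimate in a single pass; adding the dead-tuple count back would have kept reltuples stable at 10M.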
{
"msg_contents": "On Wed, Jan 18, 2023 at 2:22 PM Andres Freund <andres@anarazel.de> wrote:\n> The problem with the change is here:\n>\n> /*\n> * Okay, we've covered the corner cases. The normal calculation is to\n> * convert the old measurement to a density (tuples per page), then\n> * estimate the number of tuples in the unscanned pages using that figure,\n> * and finally add on the number of tuples in the scanned pages.\n> */\n> old_density = old_rel_tuples / old_rel_pages;\n> unscanned_pages = (double) total_pages - (double) scanned_pages;\n> total_tuples = old_density * unscanned_pages + scanned_tuples;\n> return floor(total_tuples + 0.5);\n\nMy assumption has always been that vac_estimate_reltuples() is prone\nto issues like this because it just doesn't have access to very much\ninformation each time it runs. It can only see the delta between what\nVACUUM just saw, and what the last VACUUM (or possibly the last\nANALYZE) saw according to pg_class. You're always going to find\nweaknesses in such a model if you go looking for them. You're always\ngoing to find a way to salami slice your way from good information to\ntotal nonsense, if you pick the right/wrong test case, which runs\nVACUUM in a way that allows whatever bias there may be to accumulate.\nIt's sort of like the way floating point values can become very\ninaccurate through a process that allows many small inaccuracies to\naccumulate over time.\n\nMaybe you're right to be concerned to the degree that you're concerned\n-- I'm not sure. I'm just adding what I see as important context.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 14:37:20 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 2:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Maybe you're right to be concerned to the degree that you're concerned\n> -- I'm not sure. I'm just adding what I see as important context.\n\nThe problems in this area tend to be that vac_estimate_reltuples()\nbehaves as if it sees a random sample, when in fact it's far from\nrandom -- it's the same scanned_pages as last time, and the ten other\ntimes before that. That's a feature of your test case, and other\nsimilar vac_estimate_reltuples test cases that I came up with in the\npast. Both scenarios involved skipping using the visibility map in\nmultiple successive VACUUM operations.\n\nPerhaps we should make vac_estimate_reltuples focus on the pages that\nVACUUM newly set all-visible each time (not including all-visible\npages that got scanned despite being all-visible) -- only that subset\nof scanned_pages seems to be truly relevant. That way you wouldn't be\nable to do stuff like this. We'd have to start explicitly tracking the\nnumber of pages that were newly set in the VM in vacuumlazy.c to be\nable to do that, but that seems like a good idea anyway.\n\nThis probably has consequences elsewhere, but maybe that's okay. We\nknow when the existing pg_class has no information, since that is\nexplicitly encoded by a reltuples of -1. Obviously we'd need to be\ncareful about stuff like that. Overall, the danger from assuming that\n\"unsettled\" pages (pages that couldn't be newly set all-visible by\nVACUUM) have a generic tuple density seems less than the danger of\nassuming that they're representative. We know that we're bound to scan\nthese same pages in the next VACUUM anyway, so they'll have another\nchance to impact our view of the table's tuple density at that time.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 15:28:19 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 3:28 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The problems in this area tend to be that vac_estimate_reltuples()\n> behaves as if it sees a random sample, when in fact it's far from\n> random -- it's the same scanned_pages as last time, and the ten other\n> times before that. That's a feature of your test case, and other\n> similar vac_estimate_reltuples test cases that I came up with in the\n> past. Both scenarios involved skipping using the visibility map in\n> multiple successive VACUUM operations.\n\nFWIW, the problem in your test case goes away if you just change this line:\n\nDELETE FROM foo WHERE i < (10000000 * 0.1)\n\nTo this:\n\nDELETE FROM foo WHERE i < (10000000 * 0.065)\n\nWhich is not a huge difference, overall. This effect is a consequence\nof the heuristics I added in commit 74388a1a, so it's only present on\nPostgres 15+.\n\nWhether or not this is sufficient protection is of course open to\ninterpretation. One problem with those heuristics (as far as your test\ncase is concerned) is that they either work, or they don't work. For\nexample they're conditioned on \"old_rel_pages == total_pages\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 15:59:00 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 14:37:20 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 2:22 PM Andres Freund <andres@anarazel.de> wrote:\n> > The problem with the change is here:\n> >\n> > /*\n> > * Okay, we've covered the corner cases. The normal calculation is to\n> > * convert the old measurement to a density (tuples per page), then\n> > * estimate the number of tuples in the unscanned pages using that figure,\n> > * and finally add on the number of tuples in the scanned pages.\n> > */\n> > old_density = old_rel_tuples / old_rel_pages;\n> > unscanned_pages = (double) total_pages - (double) scanned_pages;\n> > total_tuples = old_density * unscanned_pages + scanned_tuples;\n> > return floor(total_tuples + 0.5);\n> \n> My assumption has always been that vac_estimate_reltuples() is prone\n> to issues like this because it just doesn't have access to very much\n> information each time it runs. It can only see the delta between what\n> VACUUM just saw, and what the last VACUUM (or possibly the last\n> ANALYZE) saw according to pg_class. You're always going to find\n> weaknesses in such a model if you go looking for them. You're always\n> going to find a way to salami slice your way from good information to\n> total nonsense, if you pick the right/wrong test case, which runs\n> VACUUM in a way that allows whatever bias there may be to accumulate.\n> It's sort of like the way floating point values can become very\n> inaccurate through a process that allows many small inaccuracies to\n> accumulate over time.\n\nSure. To start with, there's always going to be some inaccuracies when you\nassume an even distribution across a table. But I think this goes beyond\nthat.\n\nThis problem occurs with a completely even distribution, exactly the same\ninputs to the estimation function every time. My example under-sold the\nseverity, because I had only 5% non-deletable tuples. 
Here it is with 50%\nnon-removable tuples (I've seen way worse than 50% in many real-world cases),\nand a bunch of complexity removed (attached).\n\nvacuum-no\treltuples/n_live_tup\tn_dead_tup\n1\t\t4999976\t\t\t5000000\n2\t\t2500077\t\t\t5000000\n3\t\t1250184\t\t\t5000000\n4\t\t 625266\t\t\t5000000\n5\t\t 312821\t\t\t5000000\n10\t\t 10165\t\t\t5000000\n\nEach vacuum halves reltuples. That's going to screw badly with all kinds of\nthings. Planner costs completely out of whack etc.\n\n\n\nI wonder if this is part of the reason for the distortion you addressed with\n74388a1a / 3097bde7dd1d. I am somewhat doubtful they're right as is. For a\nlarge relation 2% of blocks is a significant number of rows, and simply never\nadjusting reltuples seems quite problematic. At the very least we ought to\naccount for dead tids we removed or such, instead of just freezing reltuples.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 18 Jan 2023 16:02:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
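The halving in the table above can be reproduced with a simplified model (hypothetical page counts; the real reproducer is the attached SQL): the not-yet-removable tuples keep their pages from becoming all-visible, so every vacuum re-scans exactly those pages, counts zero live tuples there, and halves the extrapolated total.

```python
def density_estimate(old_tuples, old_pages, total_pages,
                     scanned_pages, scanned_tuples):
    # sketch of vac_estimate_reltuples()'s normal-path calculation
    old_density = old_tuples / old_pages
    return int(old_density * (total_pages - scanned_pages) + scanned_tuples + 0.5)

total_pages = 1000
dead_pages = 500          # pages holding only non-removable dead tuples
reltuples = 10_000_000    # starts out roughly right: 5M live + 5M dead

history = []
for _ in range(5):
    # each vacuum skips the all-visible live half and scans only the
    # dead half, where it counts zero live tuples
    reltuples = density_estimate(reltuples, total_pages,
                                 total_pages, dead_pages, 0)
    history.append(reltuples)
print(history)  # [5000000, 2500000, 1250000, 625000, 312500]
```

Same geometric decay as the quoted n_live_tup column, and the true live count (5M) never changes.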
{
"msg_contents": "On Wed, Jan 18, 2023 at 4:02 PM Andres Freund <andres@anarazel.de> wrote:\n> vacuum-no reltuples/n_live_tup n_dead_tup\n> 1 4999976 5000000\n> 2 2500077 5000000\n> 3 1250184 5000000\n> 4 625266 5000000\n> 5 312821 5000000\n> 10 10165 5000000\n>\n> Each vacuum halves reltuples. That's going to screw badly with all kinds of\n> things. Planner costs completely out of whack etc.\n\nI get that that could be a big problem, even relative to the more\nimmediate problem of VACUUM just spinning like it does in your test\ncase. What do you think we should do about it? What do you think about\nmy idea of focussing on the subset of pages newly set all-visible in\nthe VM?\n\n> I wonder if this is part of the reason for the distortion you addressed with\n> 74388a1a / 3097bde7dd1d. I am somewhat doubtful they're right as is. For a\n> large relation 2% of blocks is a significant number of rows, and simply never\n> adjusting reltuples seems quite problematic. At the very least we ought to\n> account for dead tids we removed or such, instead of just freezing reltuples.\n\nAs I mentioned, it only kicks in when relpages is *precisely* the same\nas last time (not one block more or one block less), *and* we only\nscanned less than 2% of rel_pages. It's quite possible that that's\ninsufficient, but I can't imagine it causing any new problems.\n\nI think that we need to be realistic about what's possible while\nstoring a small, fixed amount of information. There is always going to\nbe some distortion of this kind. We can do something about the\nobviously pathological cases, where errors can grow without bound. But\nyou have to draw the line somewhere, unless you're willing to replace\nthe whole approach with something that stores historic metadata.\n\nWhat kind of tradeoff do you want to make here? I think that you have\nto make one.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 16:19:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
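For reference, the guard being discussed can be sketched from Peter's description above (the exact shape in commit 74388a1a may differ -- treat this as a paraphrase, with the thresholds as stated in the message):

```python
def keep_old_reltuples(old_rel_pages, total_pages, scanned_pages):
    """Paraphrase of the heuristic: freeze the old reltuples only when
    relpages is *exactly* unchanged AND under 2% of pages were scanned."""
    return old_rel_pages == total_pages and scanned_pages < 0.02 * total_pages

print(keep_old_reltuples(1000, 1000, 10))   # True: tiny scan, same size
print(keep_old_reltuples(1000, 1001, 10))   # False: relation grew a block
print(keep_old_reltuples(1000, 1000, 500))  # False: scanned half the table
```

The all-or-nothing character Peter concedes is visible here: one extra block, or crossing the 2% line, flips the behavior entirely.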
{
"msg_contents": "Hi,\n\nOn 2023-01-18 13:45:19 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 1:08 PM Andres Freund <andres@anarazel.de> wrote:\n> > I suggested nearby to only have ANALYZE dead_tuples it if there's been no\n> > [auto]vacuum since the stats entry was created. That allows recovering from\n> > stats resets, be it via crashes or explicitly. What do you think?\n>\n> I like that idea. It's far simpler than the kind of stuff I was\n> thinking about, and probably just as effective. Even if it introduces\n> some unforeseen problem (which seems unlikely), we can still rely on\n> pgstat_report_vacuum() to set things straight before too long.\n>\n> Are you planning on writing a patch for this? I'd be very interested\n> in seeing this through. Could definitely review it.\n\nI can, it should be just about trivial code-wise. A bit queasy about trying to\nforesee the potential consequences.\n\n\nA somewhat related issue is that pgstat_report_vacuum() sets dead_tuples to\nwhat VACUUM itself observed, ignoring any concurrently reported dead\ntuples. As far as I can tell, when vacuum takes a long time, that can lead to\nseverely under-accounting dead tuples.\n\nWe probably lose track of a bit more than 50% of the dead tuples reported\nsince vacuum started. During the heap scan phase we don't notice all the\ntuples reported before the current scan point, and then we don't notice them\nat all during the index/heap vacuuming.\n\nThe only saving grace is that it'll be corrected at the next VACUUM. But the\nnext vacuum might very well be delayed noticeably due to this.\n\n\nThis is similar to the issue around ins_since_vacuum that Peter pointed out.\n\n\nI wonder if we ought to remember the dead_tuples value at the start of the\nheap scan and use that to adjust the final dead_tuples value.
I'd lean towards\nover-counting rather than under-counting and thus probably would go for\nsomething like\n\n tabentry->dead_tuples = livetuples + Min(0, tabentry->dead_tuples - deadtuples_at_start);\n\ni.e. assuming we might have missed all concurrently reported dead tuples.\n\n\n\n\nOf course we could instead move to something like ins_since_vacuum and reset\nit at the *start* of the vacuum. But that'd make the error case harder,\nwithout giving us more accuracy, I think?\n\n\nI do think this is an argument for splitting up dead_tuples into separate\n\"components\" that we track differently. I.e. tracking the number of dead\nitems, not-yet-removable rows, and the number of dead tuples reported from DML\nstatements via pgstats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Jan 2023 16:37:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 4:37 PM Andres Freund <andres@anarazel.de> wrote:\n> I can, it should be just about trivial code-wise. A bit queasy about trying to\n> forsee the potential consequences.\n\nThat's always going to be true, though.\n\n> A somewhat related issue is that pgstat_report_vacuum() sets dead_tuples to\n> what VACUUM itself observed, ignoring any concurrently reported dead\n> tuples. As far as I can tell, when vacuum takes a long time, that can lead to\n> severely under-accounting dead tuples.\n\nDid I not mention that one? There are so many that it can be hard to\nkeep track! That's why I catalog them.\n\nAs you point out, it's the dead tuples equivalent of my\nins_since_vacuum complaint. The problem is exactly analogous to my\nrecent complaint about insert-driven autovacuums.\n\n> I wonder if we ought to remember the dead_tuples value at the start of the\n> heap scan and use that to adjust the final dead_tuples value. I'd lean towards\n> over-counting rather than under-counting and thus probably would go for\n> something like\n>\n> tabentry->dead_tuples = livetuples + Min(0, tabentry->dead_tuples - deadtuples_at_start);\n>\n> i.e. assuming we might have missed all concurrently reported dead tuples.\n\nThis is exactly what I was thinking of doing for both issues (the\nins_since_vacuum one and this similar dead tuples one). It's\ncompletely logical.\n\nThis creates an awkward but logical question, though: what if\ndead_tuples doesn't go down at all? What if VACUUM actually has to\nincrease it, because VACUUM runs so slowly relative to the workload?\nOf course the whole point is to make it more likely that VACUUM will\nkeep up with the workload. I'm just not quite sure that the\nconsequences of doing it that way are strictly a good thing. 
Bearing\nin mind that we don't differentiate between recently dead and dead\nhere.\n\nFun fact: autovacuum can spin with pgbench because of recently dead\ntuples, even absent an old snapshot/long running xact, if you set\nthings aggressively enough:\n\nhttps://postgr.es/m/CAH2-Wz=sJm3tm+FpXbyBhEhX5tbz1trQrhG6eOhYk4-+5uL=ww@mail.gmail.com\n\nI think that we probably need to do something like always make sure\nthat dead_items goes down by a small amount at the end of each VACUUM,\neven when that's a lie. Maybe we also have a log message about\nautovacuum not keeping up, so as to not feel too guilty about it. You\nknow, to give the user a chance to reconfigure autovacuum so that it\nstops happening.\n\n> Of course we could instead move to something like ins_since_vacuum and reset\n> it at the *start* of the vacuum. But that'd make the error case harder,\n> without giving us more accuracy, I think?\n\nIt would. It seems illogical to me.\n\n> I do think this is an argument for splitting up dead_tuples into separate\n> \"components\" that we track differently. I.e. tracking the number of dead\n> items, not-yet-removable rows, and the number of dead tuples reported from DML\n> statements via pgstats.\n\nIs it? Why?\n\nI'm all in favor of doing that, of course. I just don't particularly\nthink that it's related to this other problem. One problem is that we\ncount dead tuples incorrectly because we don't account for the fact\nthat things change while VACUUM runs. The other problem is that the\nthing that is counted isn't broken down into distinct subcategories of\nthings -- things are bunched together that shouldn't be.\n\nOh wait, you were thinking of what I said before -- my \"awkward but\nlogical question\". Is that it?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 17:00:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 16:19:02 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 4:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > vacuum-no reltuples/n_live_tup n_dead_tup\n> > 1 4999976 5000000\n> > 2 2500077 5000000\n> > 3 1250184 5000000\n> > 4 625266 5000000\n> > 5 312821 5000000\n> > 10 10165 5000000\n> >\n> > Each vacuum halves reltuples. That's going to screw badly with all kinds of\n> > things. Planner costs completely out of whack etc.\n>\n> I get that that could be a big problem, even relative to the more\n> immediate problem of VACUUM just spinning like it does in your test\n> case. What do you think we should do about it?\n\nThe change made in 7c91a0364fc imo isn't right. We need to fix it. I think\nit's correct that now pgstat_report_vacuum() doesn't include recently dead\ntuples in livetuples - they're still tracked via deadtuples. But it's wrong\nfor vacuum's call to vac_update_relstats() to not include recently dead\ntuples, at least when we only scanned part of the relation.\n\nI think the right thing would be to not undo the semantic change of\n7c91a0364fc as a whole, but instead take recently-dead tuples into account\nonly in the \"Okay, we've covered the corner cases.\" part, to avoid the\nspiraling seen above.\n\nNot super clean, but it seems somewhat fundamental that we'll re-scan pages\nfull of recently-dead tuples in the near future. If we, in a way, subtract\nthe recently dead tuples from reltuples in this cycle, we shouldn't do so\nagain in the next - but not taking recently dead into account, does so.\n\n\nIt's a bit complicated because of the APIs involved. vac_estimate_reltuples()\ncomputes vacrel->new_live_tuples and contains the logic for how to compute the\nnew reltuples. But we use the ->new_live_tuples both vac_update_relstats(),\nwhere we, imo, should take recently-dead into account for partial scans and\npgstat_report_vacuum where we shouldn't. 
I guess we would need to add an\noutput parameter both for \"reltuples\" and \"new_live_tuples\".\n\n\n\n\n\n\n\n> What do you think about my idea of focussing on the subset of pages newly\n> set all-visible in the VM?\n\nI don't understand it yet.\n\nOn 2023-01-18 15:28:19 -0800, Peter Geoghegan wrote:\n> Perhaps we should make vac_estimate_reltuples focus on the pages that\n> VACUUM newly set all-visible each time (not including all-visible\n> pages that got scanned despite being all-visible) -- only that subset\n> of scanned_pages seems to be truly relevant. That way you wouldn't be\n> able to do stuff like this. We'd have to start explicitly tracking the\n> number of pages that were newly set in the VM in vacuumlazy.c to be\n> able to do that, but that seems like a good idea anyway.\n\nCan you explain a bit more what you mean with \"focus on the pages\"?\n\n\n\n> > I wonder if this is part of the reason for the distortion you addressed with\n> > 74388a1a / 3097bde7dd1d. I am somewhat doubtful they're right as is. For a\n> > large relation 2% of blocks is a significant number of rows, and simply never\n> > adjusting reltuples seems quite problematic. At the very least we ought to\n> > account for dead tids we removed or such, instead of just freezing reltuples.\n>\n> As I mentioned, it only kicks in when relpages is *precisely* the same\n> as last time (not one block more or one block less), *and* we only\n> scanned less than 2% of rel_pages. It's quite possible that that's\n> insufficient, but I can't imagine it causing any new problems.\n\nIn OLTP workloads relpages will often not change, even if there's lots of\nwrite activity, because there's plenty of free space in the relation, and there's\nsomething not-removable on the last page. relpages also won't change if data\nis deleted anywhere but the end.\n\nI don't think it's hard to see this causing problems. 
Set\nautovacuum_vacuum_scale_factor to something smaller than 2% or somewhat\nfrequently vacuum manually. Incrementally delete old data. Unless analyze\nsaves you - which might not be run or might have a different scale factor or\nnot be run manually - reltuples will stay exactly the same, despite data\nchanging substantially.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Jan 2023 17:49:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> How about a float autovacuum_no_auto_cancel_age where positive values are\n> treated as absolute values, and negative values are a multiple of\n> autovacuum_freeze_max_age? And where the \"computed\" age is capped at\n> vacuum_failsafe_age? A \"failsafe\" autovacuum clearly shouldn't be cancelled.\n>\n> And maybe a default setting of -1.8 or so?\n\nAttached is a new revision, v5. I'm not happy with this, but thought\nit would be useful to show you where I am with it.\n\nIt's a bit awkward that we have a GUC (autovacuum_no_auto_cancel_age)\nthat can sometimes work as a cutoff that works similarly to both\nfreeze_max_age and multixact_freeze_max_age, but usually works as a\nmultiplier. It's both an XID age value, an MXID age value, and a\nmultiplier on XID/MXID age values.\n\nWhat if it was just a simple multiplier on\nfreeze_max_age/multixact_freeze_max_age, without changing any other\ndetail?\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 18 Jan 2023 17:54:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 17:00:48 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 4:37 PM Andres Freund <andres@anarazel.de> wrote:\n> > I can, it should be just about trivial code-wise. A bit queasy about trying to\n> > forsee the potential consequences.\n> \n> That's always going to be true, though.\n> \n> > A somewhat related issue is that pgstat_report_vacuum() sets dead_tuples to\n> > what VACUUM itself observed, ignoring any concurrently reported dead\n> > tuples. As far as I can tell, when vacuum takes a long time, that can lead to\n> > severely under-accounting dead tuples.\n> \n> Did I not mention that one? There are so many that it can be hard to\n> keep track! That's why I catalog them.\n\nI don't recall you doing, but there's lot of emails and holes in my head.\n\n\n> This creates an awkward but logical question, though: what if\n> dead_tuples doesn't go down at all? What if VACUUM actually has to\n> increase it, because VACUUM runs so slowly relative to the workload?\n\nSure, that can happen - but it's not made better by having wrong stats :)\n\n\n> > I do think this is an argument for splitting up dead_tuples into separate\n> > \"components\" that we track differently. I.e. tracking the number of dead\n> > items, not-yet-removable rows, and the number of dead tuples reported from DML\n> > statements via pgstats.\n> \n> Is it? Why?\n\nWe have reasonably sophisticated accounting in pgstats what newly live/dead\nrows a transaction \"creates\". So an obvious (and wrong) idea is just decrement\nreltuples by the number of tuples removed by autovacuum. 
But we can't do that,\nbecause inserted/deleted tuples reported by backends can be removed by\non-access pruning and vacuumlazy doesn't know about all changes made by its\ncall to heap_page_prune().\n\nBut I think that if we add a\n pgstat_count_heap_prune(nredirected, ndead, nunused)\naround heap_page_prune() and a\n pgstat_count_heap_vacuum(nunused)\nin lazy_vacuum_heap_page(), we'd likely end up with a better approximation\nthan what vac_estimate_reltuples() does, in the \"partially scanned\" case.\n\n\n\n> I'm all in favor of doing that, of course. I just don't particularly\n> think that it's related to this other problem. One problem is that we\n> count dead tuples incorrectly because we don't account for the fact\n> that things change while VACUUM runs. The other problem is that the\n> thing that is counted isn't broken down into distinct subcategories of\n> things -- things are bunched together that shouldn't be.\n\nIf we only adjust the counters incrementally, as we go, we'd not update them\nat the end of vacuum. I think it'd be a lot easier to only update the counters\nincrementally if we split ->dead_tuples into sub-counters.\n\nSo I don't think it's entirely unrelated.\n\nYou probably could get close without splitting the counters, by just pushing\ndown the counting, and only counting redirected and unused during heap\npruning. But I think it's likely to be more accurate with the split counter.\n\n\n\n> Oh wait, you were thinking of what I said before -- my \"awkward but\n> logical question\". Is that it?\n\nI'm not quite following? The \"awkward but logical\" bit is in the email I'm\njust replying to, right?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:10:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 5:49 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-18 15:28:19 -0800, Peter Geoghegan wrote:\n> > Perhaps we should make vac_estimate_reltuples focus on the pages that\n> > VACUUM newly set all-visible each time (not including all-visible\n> > pages that got scanned despite being all-visible) -- only that subset\n> > of scanned_pages seems to be truly relevant. That way you wouldn't be\n> > able to do stuff like this. We'd have to start explicitly tracking the\n> > number of pages that were newly set in the VM in vacuumlazy.c to be\n> > able to do that, but that seems like a good idea anyway.\n>\n> Can you explain a bit more what you mean with \"focus on the pages\" means?\n\nWe don't say anything about pages we didn't scan right now -- only\nscanned_pages have new information, so we just extrapolate. Why not go\neven further than that, by only saying something about the pages that\nwere both scanned and newly set all-visible?\n\nUnder this scheme, the pages that were scanned but couldn't be newly\nset all-visible are treated just like the pages that weren't scanned\nat all -- they get a generic estimate from the existing reltuples.\n\n> I don't think it's hard to see this causing problems. Set\n> autovacuum_vacuum_scale_factor to something smaller than 2% or somewhat\n> frequently vacuum manually. Incrementally delete old data. Unless analyze\n> saves you - which might not be run or might have a different scale factor or\n> not be run manually - reltuples will stay exactly the same, despite data\n> changing substantially.\n\nYou seem to be saying that it's a problem if we don't update reltuples\n-- an estimate -- when less than 2% of the table is scanned by VACUUM.\nBut why? Why can't we just do nothing sometimes? I mean in general,\nleaving aside the heuristics I came up with for a moment?\n\nIt will cause problems if we remove the heuristics. Much less\ntheoretical problems. 
What about those?\n\nHow often does VACUUM scan so few pages, anyway? We've been talking\nabout how ineffective autovacuum_vacuum_scale_factor is, at great\nlength, but now you're saying that it *can* meaningfully trigger not\njust one VACUUM, but many VACUUMs, where no more than 2% of rel_pages\nare not all-visible (pages, not tuples)? Not just once, mind you, but\nmany times? And in the presence of some kind of highly variable tuple\nsize, where it actually could matter to the planner at some point?\n\nI would be willing to just avoid even these theoretical problems if\nthere was some way to do so, that didn't also create new problems. I\nhave my doubts that that is possible, within the constraints of\nupdating pg_class. Or the present constraints, at least. I am not a\nmiracle worker -- I can only do so much with the information that's\navailable to vac_update_relstats (and/or the information that can\neasily be made available).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:21:33 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 6:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > This creates an awkward but logical question, though: what if\n> > dead_tuples doesn't go down at all? What if VACUUM actually has to\n> > increase it, because VACUUM runs so slowly relative to the workload?\n>\n> Sure, that can happen - but it's not made better by having wrong stats :)\n\nMaybe it's that simple. It's reasonable to wonder how far we want to\ngo with letting dead tuples grow and grow, even when VACUUM is running\nconstantly. It's not a question that has an immediate and obvious\nanswer IMV.\n\nMaybe the real question is: is this an opportunity to signal to the\nuser (say via a LOG message) that VACUUM cannot keep up? That might be\nvery useful, in a general sort of way (not just to avoid new\nproblems).\n\n> We have reasonably sophisticated accounting in pgstats what newly live/dead\n> rows a transaction \"creates\". So an obvious (and wrong) idea is just decrement\n> reltuples by the number of tuples removed by autovacuum.\n\nDid you mean dead_tuples?\n\n> But we can't do that,\n> because inserted/deleted tuples reported by backends can be removed by\n> on-access pruning and vacuumlazy doesn't know about all changes made by its\n> call to heap_page_prune().\n\nI'm not following here. Perhaps this is a good sign that I should stop\nworking for the day. :-)\n\n> But I think that if we add a\n> pgstat_count_heap_prune(nredirected, ndead, nunused)\n> around heap_page_prune() and a\n> pgstat_count_heap_vacuum(nunused)\n> in lazy_vacuum_heap_page(), we'd likely end up with a better approximation\n> than what vac_estimate_reltuples() does, in the \"partially scanned\" case.\n\nWhat does vac_estimate_reltuples() have to do with dead tuples?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:46:55 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 18:21:33 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 5:49 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-01-18 15:28:19 -0800, Peter Geoghegan wrote:\n> > > Perhaps we should make vac_estimate_reltuples focus on the pages that\n> > > VACUUM newly set all-visible each time (not including all-visible\n> > > pages that got scanned despite being all-visible) -- only that subset\n> > > of scanned_pages seems to be truly relevant. That way you wouldn't be\n> > > able to do stuff like this. We'd have to start explicitly tracking the\n> > > number of pages that were newly set in the VM in vacuumlazy.c to be\n> > > able to do that, but that seems like a good idea anyway.\n> >\n> > Can you explain a bit more what you mean with \"focus on the pages\" means?\n>\n> We don't say anything about pages we didn't scan right now -- only\n> scanned_pages have new information, so we just extrapolate. Why not go\n> even further than that, by only saying something about the pages that\n> were both scanned and newly set all-visible?\n>\n> Under this scheme, the pages that were scanned but couldn't be newly\n> set all-visible are treated just like the pages that weren't scanned\n> at all -- they get a generic estimate from the existing reltuples.\n\nI don't think that'd work well either. If we actually removed a large chunk of\nthe tuples in the table it should be reflected in reltuples, otherwise costing\nand autovac scheduling suffers. And you might not be able to set all those\npage to all-visible, because of more recent rows.\n\n\n> > I don't think it's hard to see this causing problems. Set\n> > autovacuum_vacuum_scale_factor to something smaller than 2% or somewhat\n> > frequently vacuum manually. Incrementally delete old data. 
Unless analyze\n> > saves you - which might not be run or might have a different scale factor or\n> > not be run manually - reltuples will stay exactly the same, despite data\n> > changing substantially.\n>\n> You seem to be saying that it's a problem if we don't update reltuples\n> -- an estimate -- when less than 2% of the table is scanned by VACUUM.\n> But why? Why can't we just do nothing sometimes? I mean in general,\n> leaving aside the heuristics I came up with for a moment?\n\nThe problem isn't that we might apply the heuristic once, that'd be fine. But\nthat there's nothing preventing it from applying until there basically are no\ntuples left, as long as the vacuum is frequent enough.\n\nAs a demo: The attached sql script ends up with a table containing 10k rows,\nbut relpages being set 1 million.\n\nVACUUM foo;\nEXPLAIN (ANALYZE) SELECT * FROM foo;\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Seq Scan on foo (cost=0.00..14425.00 rows=1000000 width=4) (actual time=3.251..4.693 rows=10000 loops=1) │\n│ Planning Time: 0.056 ms │\n│ Execution Time: 5.491 ms │\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(3 rows)\n\n\n> It will cause problems if we remove the heuristics. Much less\n> theoretical problems. What about those?\n\nI don't think what I describe is a theoretical problem.\n\n\n> How often does VACUUM scan so few pages, anyway? We've been talking\n> about how ineffective autovacuum_vacuum_scale_factor is, at great\n> length, but now you're saying that it *can* meaningfully trigger not\n> just one VACUUM, but many VACUUMs, where no more than 2% of rel_pages\n> are not all-visible (pages, not tuples)? 
Not just once, mind you, but\n> many times?\n\nI've seen autovacuum_vacuum_scale_factor set to 0.01 repeatedly. But that's\nnot even needed - you just need a long-running transaction preventing at least\none dead row from getting removed and to hit autovacuum_freeze_max_age. There'll\nbe continuous VACUUMs of the table, all only processing a small fraction of\nthe table.\n\nAnd I have many times seen bulk loading / deletion scripts that do VACUUM on a\nregular schedule, which also could easily trigger this.\n\n\n> And in the presence of some kind of highly variable tuple\n> size, where it actually could matter to the planner at some point?\n\nI don't see how a variable tuple size needs to be involved? As the EXPLAIN\nANALYZE above shows, we'll end up with wrong row count estimates etc.\n\n\n> I would be willing to just avoid even these theoretical problems if\n> there was some way to do so, that didn't also create new problems. I\n> have my doubts that that is possible, within the constraints of\n> updating pg_class. Or the present constraints, at least. I am not a\n> miracle worker -- I can only do so much with the information that's\n> available to vac_update_relstats (and/or the information that can\n> easily be made available).\n\nI'm worried they might cause new problems.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 18 Jan 2023 19:04:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 7:04 PM Andres Freund <andres@anarazel.de> wrote:\n> > You seem to be saying that it's a problem if we don't update reltuples\n> > -- an estimate -- when less than 2% of the table is scanned by VACUUM.\n> > But why? Why can't we just do nothing sometimes? I mean in general,\n> > leaving aside the heuristics I came up with for a moment?\n>\n> The problem isn't that we might apply the heuristic once, that'd be fine. But\n> that there's nothing preventing it from applying until there basically are no\n> tuples left, as long as the vacuum is frequent enough.\n>\n> As a demo: The attached sql script ends up with a table containing 10k rows,\n> but relpages being set 1 million.\n\nI saw that too. But then I checked again a few seconds later, and\nautoanalyze had run, so reltuples was 10k. Just like it would have if\nthere was no VACUUM statements in your script.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 Jan 2023 19:26:22 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> pgstat_report_analyze() will totally override the\n> tabentry->dead_tuples information that drives autovacuum.c, based on\n> an estimate derived from a random sample -- which seems to me to be an\n> approach that just doesn't have any sound theoretical basis.\n\nIn other words, ANALYZE sometimes (but not always) produces wrong answers.\n\nOn Wed, Jan 18, 2023 at 4:08 PM Andres Freund <andres@anarazel.de> wrote:\n> One complicating factor is that VACUUM sometimes computes an incrementally\n> more bogus n_live_tup when it skips pages due to the VM, whereas ANALYZE\n> computes something sane. I unintentionally encountered one when I was trying\n> something while writing this email, reproducer attached.\n\nIn other words, VACUUM sometimes (but not always) produces wrong answers.\n\nTL;DR: We're screwed.\n\nI refuse to believe that any amount of math you can do on numbers that\ncan be arbitrarily inaccurate will result in an accurate answer\npopping out the other end. Trying to update the reltuples estimate\nincrementally based on an estimate derived from a non-random,\nlikely-to-be-skewed subset of the table is always going to produce\ndistortion that gets worse and worse the more times you do it. If\ncould say, well, the existing estimate of let's say 100 tuples per\npage is based on the density being 200 tuples per page in the pages I\njust scanned and 50 tuples per page in the rest of the table, then you\ncould calculate a new estimate that keeps the value of 50 tuples per\npage for the remainder of the table intact and just replaces the\nestimate for the part you just scanned. But we have no way of doing\nthat, so we just make some linear combination of the old estimate with\nthe new one. That overweights the repeatedly-sampled portions of the\ntable more and more, making the estimate wronger and wronger.\n\nNow, that is already quite bad. 
But if we accept the premise that\nneither VACUUM nor ANALYZE is guaranteed to ever produce a new\nactually-reliable estimate, then not only will we go progressively\nmore wrong as time goes by, but we have no way of ever fixing\nanything. If you get a series of unreliable data points followed by a\nreliable data point, you can at least get back on track when the\nreliable data shows up. But it sounds like you guys are saying that\nthere's no guarantee that will ever happen, which is a bit like\ndiscovering that not only do you have a hole in your gas tank but\nthere is no guarantee that you will arrive at a gas station ever again\nregardless of distance travelled.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Jan 2023 15:12:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-19 15:12:12 -0500, Robert Haas wrote:\n> On Wed, Jan 18, 2023 at 1:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > pgstat_report_analyze() will totally override the\n> > tabentry->dead_tuples information that drives autovacuum.c, based on\n> > an estimate derived from a random sample -- which seems to me to be an\n> > approach that just doesn't have any sound theoretical basis.\n> \n> In other words, ANALYZE sometimes (but not always) produces wrong answers.\n\nFor dead tuples, but not live tuples.\n\n\n> On Wed, Jan 18, 2023 at 4:08 PM Andres Freund <andres@anarazel.de> wrote:\n> > One complicating factor is that VACUUM sometimes computes an incrementally\n> > more bogus n_live_tup when it skips pages due to the VM, whereas ANALYZE\n> > computes something sane. I unintentionally encountered one when I was trying\n> > something while writing this email, reproducer attached.\n> \n> In other words, VACUUM sometimes (but not always) produces wrong answers.\n\nFor live tuples, but not badly so for dead tuples.\n\n\n> TL;DR: We're screwed.\n\nWe are, but perhaps not too badly so, because we can choose to believe analyze\nmore for live tuples, and vacuum for dead tuples. Analyze doesn't compute\nreltuples incrementally and vacuum doesn't compute deadtuples incrementally.\n\n\n\n> I refuse to believe that any amount of math you can do on numbers that\n> can be arbitrarily inaccurate will result in an accurate answer\n> popping out the other end. Trying to update the reltuples estimate\n> incrementally based on an estimate derived from a non-random,\n> likely-to-be-skewed subset of the table is always going to produce\n> distortion that gets worse and worse the more times you do it. 
If we\n> could say, well, the existing estimate of let's say 100 tuples per\n> page is based on the density being 200 tuples per page in the pages I\n> just scanned and 50 tuples per page in the rest of the table, then you\n> could calculate a new estimate that keeps the value of 50 tuples per\n> page for the remainder of the table intact and just replaces the\n> estimate for the part you just scanned. But we have no way of doing\n> that, so we just make some linear combination of the old estimate with\n> the new one. That overweights the repeatedly-sampled portions of the\n> table more and more, making the estimate wronger and wronger.\n\nPerhaps we should, at least occasionally, make vacuum do a cheaper version of\nanalyze's sampling to compute an updated reltuples. This could even happen\nduring the heap scan phase.\n\nI don't like relying on analyze to fix vacuum's bogus reltuples, because\nthere's nothing forcing an analyze run soon after vacuum [incrementally]\nscrewed it up. Vacuum can be forced to run a lot of times due to xid horizons\npreventing cleanup, after which there isn't anything forcing analyze to run\nagain.\n\nBut in contrast to dead_tuples, where I think we can just stop analyze from\nupdating it unless we crashed recently, I do think we need to update reltuples\nin vacuum. So computing an accurate value seems like the least unreasonable\nthing I can see.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:56:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi\n\nOn 2023-01-18 19:26:22 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 18, 2023 at 7:04 PM Andres Freund <andres@anarazel.de> wrote:\n> > > You seem to be saying that it's a problem if we don't update reltuples\n> > > -- an estimate -- when less than 2% of the table is scanned by VACUUM.\n> > > But why? Why can't we just do nothing sometimes? I mean in general,\n> > > leaving aside the heuristics I came up with for a moment?\n> >\n> > The problem isn't that we might apply the heuristic once, that'd be fine. But\n> > that there's nothing preventing it from applying until there basically are no\n> > tuples left, as long as the vacuum is frequent enough.\n> >\n> > As a demo: The attached sql script ends up with a table containing 10k rows,\n> > but relpages being set 1 million.\n> \n> I saw that too. But then I checked again a few seconds later, and\n> autoanalyze had run, so reltuples was 10k. Just like it would have if\n> there was no VACUUM statements in your script.\n\nThere's absolutely no guarantee that autoanalyze is triggered\nthere. Particularly with repeated vacuums triggered due to an relfrozenxid age\nthat can't be advanced that very well might not happen within days on a large\nrelation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:58:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 12:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > In other words, ANALYZE sometimes (but not always) produces wrong answers.\n>\n> For dead tuples, but not live tuples.\n\n> > In other words, VACUUM sometimes (but not always) produces wrong answers.\n>\n> For live tuples, but not badly so for dead tuples.\n\nAgreed. More generally, there is a need to think about the whole table\nin some cases (like for regular optimizer statistics including\nreltuples/live tuples), and the subset of pages that will be scanned\nby VACUUM in other cases (for dead tuples accounting). Random sampling\nat the table level is appropriate and works well enough for the\nformer, though not for the latter.\n\n> We are, but perhaps not too badly so, because we can choose to believe analyze\n> more for live tuples, and vacuum for dead tuples. Analyze doesn't compute\n> reltuples incrementally and vacuum doesn't compute deadtuples incrementally.\n\nGood points.\n\n> But in contrast to dead_tuples, where I think we can just stop analyze from\n> updating it unless we crashed recently, I do think we need to update reltuples\n> in vacuum. So computing an accurate value seems like the least unreasonable\n> thing I can see.\n\nI agree, but there is no reasonable basis for treating scanned_pages\nas a random sample, especially if it's only a small fraction of all of\nrel_pages -- treating it as a random sample is completely\nunjustifiable. And so it seems to me that the only thing that can be\ndone is to either make VACUUM behave somewhat like ANALYZE in at least\nsome cases, or to have it invoke ANALYZE directly (or indirectly) in\nthose same cases.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 19 Jan 2023 13:22:28 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 12:58 PM Andres Freund <andres@anarazel.de> wrote:\n> There's absolutely no guarantee that autoanalyze is triggered\n> there. Particularly with repeated vacuums triggered due to an relfrozenxid age\n> that can't be advanced that very well might not happen within days on a large\n> relation.\n\nArguments like that work far better as arguments in favor of the\nvac_estimate_reltuples heuristics.\n\nThat doesn't mean that the heuristics are good in any absolute sense,\nof course. They were just a band aid intended to ameliorate some of\nthe negative impact that came from treating scanned_pages as a random\nsample. I think that we both agree that the real problem is that\nscanned_pages just isn't a random sample, at least not as far as\nreltuples/live tuples is concerned (for dead tuples it kinda isn't a\nsample, but is rather something close to an exact count).\n\nI now understand that you're in favor of addressing the root problem\ndirectly. I am also in favor of that approach. I'd be more than happy\nto get rid of the band aid as part of that whole effort.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 19 Jan 2023 13:36:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-19 13:36:41 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 19, 2023 at 12:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > There's absolutely no guarantee that autoanalyze is triggered\n> > there. Particularly with repeated vacuums triggered due to an relfrozenxid age\n> > that can't be advanced that very well might not happen within days on a large\n> > relation.\n> \n> Arguments like that work far better as arguments in favor of the\n> vac_estimate_reltuples heuristics.\n\nI don't agree. But mainly my issue is that the devil you know (how this has\nworked for a while) is preferrable to introducing an unknown quantity (your\npatch that hasn't yet seen real world exposure).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Jan 2023 14:51:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-19 13:22:28 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 19, 2023 at 12:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > But in contrast to dead_tuples, where I think we can just stop analyze from\n> > updating it unless we crashed recently, I do think we need to update reltuples\n> > in vacuum. So computing an accurate value seems like the least unreasonable\n> > thing I can see.\n> \n> I agree, but there is no reasonable basis for treating scanned_pages\n> as a random sample, especially if it's only a small fraction of all of\n> rel_pages -- treating it as a random sample is completely\n> unjustifiable.\n\nAgreed.\n\n\n> And so it seems to me that the only thing that can be done is to either make\n> VACUUM behave somewhat like ANALYZE in at least some cases, or to have it\n> invoke ANALYZE directly (or indirectly) in those same cases.\n\nYea. Hence my musing about potentially addressing this by choosing to visit\nadditional blocks during the heap scan using vacuum's block sampling logic.\n\nIME most of the time in analyze isn't spent doing IO for the sample blocks\nthemselves, but CPU time and IO for toasted columns. A trimmed down version\nthat just computes relallvisible should be a good bit faster.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Jan 2023 14:54:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 2:54 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea. Hence my musing about potentially addressing this by choosing to visit\n> additional blocks during the heap scan using vacuum's block sampling logic.\n\nI'd rather just invent a way for vacuumlazy.c to tell the top-level\nvacuum.c caller \"I didn't update reltuples, but you ought to go\nANALYZE the table now that I'm done, even if you weren't already\nplanning to do so\". This wouldn't have to happen every time, but it\nwould happen fairly often.\n\n> IME most of the time in analyze isn't spent doing IO for the sample blocks\n> themselves, but CPU time and IO for toasted columns. A trimmed down version\n> that just computes relallvisible should be a good bit faster.\n\nI worry about that from a code maintainability point of view. I'm\nconcerned that it won't be very cut down at all, in the end.\n\nPresumably you'll want to add the same I/O prefetching logic to this\ncut-down version, just for example. Since without that there will be\nno competition between it and ANALYZE proper. Besides which, isn't it\nkinda wasteful to not just do a full ANALYZE? Sure, you can avoid\ndetoasting overhead that way. But even still.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 19 Jan 2023 15:10:38 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-19 15:10:38 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 19, 2023 at 2:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea. Hence my musing about potentially addressing this by choosing to visit\n> > additional blocks during the heap scan using vacuum's block sampling logic.\n> \n> I'd rather just invent a way for vacuumlazy.c to tell the top-level\n> vacuum.c caller \"I didn't update reltuples, but you ought to go\n> ANALYZE the table now that I'm done, even if you weren't already\n> planning to do so\".\n\nI'm worried about increasing the number of analyzes that much - on a subset of\nworkloads it's really quite slow.\n\n\nAnother version of this could be to integrate analyze.c's scan more closely\nwith vacuum all the time. It's a bit bonkers that we often sequentially read\nblocks, evict them from shared buffers if we read them, just to then\nafterwards do random IO for blocks we've already read. That's imo what we\neventually should do, but clearly it's not a small project.\n\n\n> This wouldn't have to happen every time, but it would happen fairly often.\n\nDo you have a mechanism for that in mind? Just something vacuum_count % 10 ==\n0 like? Or remember scanned_pages in pgstats and re-computing\n\n\n> > IME most of the time in analyze isn't spent doing IO for the sample blocks\n> > themselves, but CPU time and IO for toasted columns. A trimmed down version\n> > that just computes relallvisible should be a good bit faster.\n> \n> I worry about that from a code maintainability point of view. I'm\n> concerned that it won't be very cut down at all, in the end.\n\nI think it'd be fine to just use analyze.c and pass in an option to not\ncompute column and inheritance stats.\n\n\n> Presumably you'll want to add the same I/O prefetching logic to this\n> cut-down version, just for example. Since without that there will be\n> no competition between it and ANALYZE proper. 
Besides which, isn't it\n> kinda wasteful to not just do a full ANALYZE? Sure, you can avoid\n> detoasting overhead that way. But even still.\n\nIt's not just that analyze is expensive, I think it'll also be confusing if\nthe column stats change after a manual VACUUM without ANALYZE.\n\nIt shouldn't be too hard to figure out whether we're going to do an analyze\nanyway and not do the rowcount-estimate version when doing VACUUM ANALYZE or\nif autovacuum scheduled an analyze as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Jan 2023 15:38:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 3:38 PM Andres Freund <andres@anarazel.de> wrote:\n> Another version of this could be to integrate analyze.c's scan more closely\n> with vacuum all the time. It's a bit bonkers that we often sequentially read\n> blocks, evict them from shared buffers if we read them, just to then\n> afterwards do random IO for blocks we've already read. That's imo what we\n> eventually should do, but clearly it's not a small project.\n\nVery often, what we're really missing in VACUUM is high level context.\nThat's true of what you say here, about analyze.c, as well as\ncomplaints like your vac_estimate_reltuples complaint. The problem\nscenarios involving vac_estimate_reltuples all involve repeating the\nsame silly action again and again, never realizing that that's what's\ngoing on. I've found it very useful to think of one VACUUM as picking\nup where the last one left off for my work on freezing.\n\nThis seems related to pre-autovacuum historical details. VACUUM\nshouldn't really be a command in the same way that CREATE INDEX is a\ncommand. I do think that we need to retain a VACUUM command in some\nform, but it should be something pretty close to a command that just\nenqueues off-schedule autovacuums. That can do things like coalesce\nduplicate requests into one.\n\nAnyway, I am generally in favor of a design that makes VACUUM and\nANALYZE things that are more or less owned by autovacuum. It should be\nless and less of a problem to blur the distinction between VACUUM and\nANALYZE under this model, in each successive release. These\ndistinctions are quite unhelpful, overall, because they make it hard\nfor autovacuum scheduling to work at the whole-system level.\n\n> > This wouldn't have to happen every time, but it would happen fairly often.\n>\n> Do you have a mechanism for that in mind? Just something vacuum_count % 10 ==\n> 0 like? 
Or remember scanned_pages in pgstats and re-computing\n\nI was thinking of something very simple like that, yes.\n\n> I think it'd be fine to just use analyze.c and pass in an option to not\n> compute column and inheritance stats.\n\nThat could be fine. Just as long as it's not duplicative in an obvious way.\n\n> > Presumably you'll want to add the same I/O prefetching logic to this\n> > cut-down version, just for example. Since without that there will be\n> > no competition between it and ANALYZE proper. Besides which, isn't it\n> > kinda wasteful to not just do a full ANALYZE? Sure, you can avoid\n> > detoasting overhead that way. But even still.\n>\n> It's not just that analyze is expensive, I think it'll also be confusing if\n> the column stats change after a manual VACUUM without ANALYZE.\n\nPossibly, but it doesn't have to happen there. It's not like the rules\naren't a bit different compared to autovacuum already. For example,\nthe way TOAST tables are handled by the VACUUM command versus\nautovacuum.\n\nEven if it's valuable to maintain this kind of VACUUM/autovacuum\nparity (which I tend to doubt), doesn't the same argument work almost\nas well with whatever stripped down version you come up with? It's\nalso confusing that a manual VACUUM command will be doing an\nANALYZE-like thing. Especially in cases where it's really expensive\nrelative to the work of VACUUM, because VACUUM scanned so few pages.\nYou just have to make some kind of trade-off.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:17:00 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 5:51 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't agree. But mainly my issue is that the devil you know (how this has\n> worked for a while) is preferrable to introducing an unknown quantity (your\n> patch that hasn't yet seen real world exposure).\n\nYeah, this is a major reason why I'm very leery about changes in this\narea. A lot of autovacuum behavior is emergent, in the sense that it\nwasn't directly intended by whoever wrote the code. It's just a\nconsequence of other decisions that probably seemed very reasonable at\nthe time they were made but turned out to have surprising and\nunpleasant consequences.\n\nIn this particular case, I think that there is a large risk that\npostponing auto-cancellation will make things significantly worse,\npossibly drastically worse, for a certain class of users -\nspecifically, those whose vacuums often get auto-cancelled. I think\nthat it's actually pretty common for people to have workloads where\nsomething pretty close to all of the autovacuums get auto-cancelled on\ncertain tables, and those people are always hard up against\nautovacuum_freeze_max_age because they *have* to hit that in order to\nget any vacuuming done on the affected tables. If the default\nthreshold for auto-cancellation goes up, those people will be\nvacuuming even less often than they are now.\n\nThat's why I really liked your idea of decoupling auto-cancellation\nfrom XID age. Such an approach can still avoid disabling\nauto-cancellation just because autovacuum_freeze_max_age has been hit,\nbut it can also disable it much earlier when it detects that doing so\nis necessary to make progress.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Jan 2023 08:47:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 5:47 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, this is a major reason why I'm very leery about changes in this\n> area. A lot of autovacuum behavior is emergent, in the sense that it\n> wasn't directly intended by whoever wrote the code. It's just a\n> consequence of other decisions that probably seemed very reasonable at\n> the time they were made but turned out to have surprising and\n> unpleasant consequences.\n\nI certainly agree with your general description of the ways things\nare. To a large degree we're living in a world where DBAs have already\ncompensated for some of the autovacuum shortcomings discussed on this\nthread. For example, by setting autovacuum_vacuum_scale_factor (and\neven autovacuum_vacuum_insert_scale_factor) to very low values, to\ncompensate for the issues with random sampling of dead tuples by\nanalyze, and to compensate for the way that VACUUM doesn't reason\ncorrectly about how the number of dead tuples changes as VACUUM runs.\nThey might not have thought of it that way -- it could have happened\nas a byproduct of tuning a production system through trial and error\n-- but it still counts as compensating for a defect in autovacuum\nscheduling IMV.\n\nIt's actually quite likely that even a strict improvement to (say)\nautovacuum scheduling will cause some number of regressions, since now\nwhat were effectively mitigations become unnecessary. This is somewhat\nsimilar to the dynamic with optimizer improvements, where (say) a\nselectivity estimate function that's better by every available metric\ncan still easily cause regressions that really cannot be blamed on the\nimprovement itself. I personally believe that it's a price worth\npaying when it comes to the issues with autovacuum statistics,\nparticularly the dead tuple count issues. Since much of the behavior\nthat we sometimes see is just absurdly bad. 
We have both water tight\ntheoretical arguments and practical experiences pointing in that\ndirection.\n\n> In this particular case, I think that there is a large risk that\n> postponing auto-cancellation will make things significantly worse,\n> possibly drastically worse, for a certain class of users -\n> specifically, those whose vacuums often get auto-cancelled.\n\nI agree that that's a real concern for the autocancellation side of\nthings. That seems quite different to the dead tuples accounting\nissues, in that nobody would claim that the existing behavior is\nflagrantly wrong (just that it sometimes causes problems instead of\npreventing them).\n\n> That's why I really liked your idea of decoupling auto-cancellation\n> from XID age. Such an approach can still avoid disabling\n> auto-cancellation just because autovacuum_freeze_max_age has been hit,\n> but it can also disable it much earlier when it detects that doing so\n> is necessary to make progress.\n\nTo be clear, I didn't think that that's what Andres was proposing, and\nmy recent v5 doesn't actually do that. Even in v5, it's still\nfundamentally impossible for autovacuums that are triggered by the\ntuples inserted or dead tuples thresholds to not be autocancellable.\n\nISTM that it doesn't make sense to condition the autocancellation\nbehavior on table XID age in the context of dead tuple VACUUMs. It\ncould either be way too early or way too late at that point. I was\nrather hoping to not have to build the infrastructure required for\nfully decoupling the autocancellation behavior from the triggering\ncondition (dead tuples vs table XID age) in the scope of this\nthread/patch, though I can see the appeal of that.\n\nThe only reason why I'm using table age at all is because that's how\nit works already, rightly or wrongly. 
If nothing else, it's pretty\nclear that there is no theoretical or practical reason why it has to\nbe exactly the same table age as the one for launching autovacuums to\nadvance relfrozenxid/relminmxid. In v5 of the patch, the default is to\nuse 1.8x of the threshold that initially makes autovacuum.c want to\nlaunch autovacuums to deal with table age. That's based on a\nsuggestion from Andres, but I'd be almost as happy with a default as\nlow as 1.1x or even 1.05x. That's going to make very little difference\nto those users that really rely on the no-auto-cancellation behavior,\nwhile at the same time making things a lot safer for scenarios like\nthe Joyent/Manta \"DROP TRIGGER\" outage (not perfectly safe, by any\nmeans, but meaningfully safer).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 20 Jan 2023 12:42:47 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
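To make the numbers concrete: with the stock autovacuum_freeze_max_age of 200 million, the 1.8x multiplier discussed in this message would keep table-age autovacuums cancellable until the table is 360 million XIDs old. The sketch below is only an illustration of that trigger math; the multiplier and its behavior come from the v5 patch as described in the thread, not from any released server:

```python
FREEZE_MAX_AGE = 200_000_000   # stock autovacuum_freeze_max_age
NO_CANCEL_MULTIPLIER = 1.8     # proposed v5 default, per the thread

def table_age(next_xid, relfrozenxid):
    # Simplified: real XID arithmetic has to handle wraparound.
    return next_xid - relfrozenxid

def tableage_autovacuum_needed(age):
    # autovacuum.c launches a table-age autovacuum past freeze_max_age.
    return age > FREEZE_MAX_AGE

def autocancel_disabled(age):
    # Under the proposal, the launched autovacuum stays cancellable
    # until the table is considerably older than freeze_max_age.
    return age > FREEZE_MAX_AGE * NO_CANCEL_MULTIPLIER

age = table_age(next_xid=1_250_000_000, relfrozenxid=1_000_000_000)
print(tableage_autovacuum_needed(age), autocancel_disabled(age))
# At an age of 250M an autovacuum launches, but it can still be cancelled.
```

With a lower multiplier like 1.1x, the no-auto-cancel behavior would instead kick in at 220 million, which is the trade-off being weighed above.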
{
"msg_contents": "On Fri, Jan 20, 2023 at 3:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The only reason why I'm using table age at all is because that's how\n> it works already, rightly or wrongly. If nothing else, t's pretty\n> clear that there is no theoretical or practical reason why it has to\n> be exactly the same table age as the one for launching autovacuums to\n> advance relfrozenxid/relminmxid. In v5 of the patch, the default is to\n> use 1.8x of the threshold that initially makes autovacuum.c want to\n> launch autovacuums to deal with table age. That's based on a\n> suggestion from Andres, but I'd be almost as happy with a default as\n> low as 1.1x or even 1.05x. That's going to make very little difference\n> to those users that really rely on the no-auto-cancellation behavior,\n> while at the same time making things a lot safer for scenarios like\n> the Joyent/Manta \"DROP TRIGGER\" outage (not perfectly safe, by any\n> means, but meaningfully safer).\n\nIt doesn't seem that way to me. What am I missing? In that case, the\nproblem was a DROP TRIGGER command waiting behind autovacuum's lock\nand thus causing all new table locks to wait behind DROP TRIGGER's\nlock request. But it does not sound like that was a one-off event. It\nsounds like they used DROP TRIGGER pretty regularly. So I think this\nsounds like exactly the kind of case I was talking about, where\nautovacuums keep getting cancelled until we decide to stop cancelling\nthem.\n\nIf so, then they were going to have a problem whenever that happened.\nDelaying the point at which we stop cancelling them would not help at\nall, as your patch would do. What about stopping cancelling them\nsooner, as with the proposal to switch to that behavior after a\ncertain number of auto-cancels? That doesn't prevent the problem\neither. 
If it's aggressive enough, it has some chance of making the\nproblem visible in a well-run test environment, which could\nconceivably prevent you from hitting it in production, but certainly\nthere are no guarantees.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Jan 2023 15:57:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 12:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> It doesn't seem that way to me. What am I missing? In that case, the\n> problem was a DROP TRIGGER command waiting behind autovacuum's lock\n> and thus causing all new table locks to wait behind DROP TRIGGER's\n> lock request. But it does not sound like that was a one-off event.\n\nIt's true that I cannot categorically state that it would have made\nthe crucial difference in this particular case. It comes down to two\nfactors:\n\n1. How many attempts would any given amount of additional XID space\nhead room have bought them in practice? We can be all but certain that\nthe smallest possible number is 1, which is something.\n\n2. Would that have been enough for relfrozenxid to be advanced in practice?\n\nI think that it's likely that the answer to 2 is yes, since there was\nno mention of bloat as a relevant factor at any point in the\npostmortem. It was all about locking characteristics of antiwraparound\nautovacuuming in particular, and its interaction with their\napplication. I think that they were perfectly okay with the autovacuum\ncancellation behavior most of the time. In fact, I don't think that\nthere was any bloat in the table at all -- it was a really huge table\n(likely an events table), and those tend to be append-only.\n\nEven if I'm wrong about this specific case (we'll never know for\nsure), the patch as written would be virtually guaranteed to make the\ncrucial differences in cases that I have seen up close. For example, a\ncase with TRUNCATE.\n\n> It sounds like they used DROP TRIGGER pretty regularly. So I think this\n> sounds like exactly the kind of case I was talking about, where\n> autovacuums keep getting cancelled until we decide to stop cancelling\n> them.\n\nI don't know how you can reach that conclusion. The chances of a\nnon-aggressive VACUUM advancing relfrozenxid right now are virtually\nzero, at least for a big table like this one. 
It seems quite likely\nthat plenty of non-aggressive autovacuums completed, or would have, had\nthe insert-driven autovacuum feature been available.\n\nThe whole article was about how this DROP TRIGGER pattern worked just\nfine most of the time, because most of the time autovacuum was just\nautocancelled. They say this at one point:\n\n\"The normal autovacuum mechanism is skipped when locks are held in\norder to minimize service disruption. However, because transaction\nwraparound is such a severe problem, if the system gets too close to\nwraparound, an autovacuum is launched that does not back off under\nlock contention.\"\n\nAt another point:\n\n\"When the outage was resolved, we still had a number of questions: is\na wraparound autovacuum always so disruptive? Given that it was\nblocking all table operations, why does it throttle itself?\"\n\nISTM that it was a combination of aggressive vacuuming taking far\nlonger than usual (especially likely because this was pre freeze map),\nand the no-auto-cancel behavior. Aggressive/antiwraparound VACUUMs are\nnaturally much more likely to coincide with periodic DDL, just because\nthey take so much longer. That is a dangerous combination.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 20 Jan 2023 13:23:45 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> I think with a bit of polish \"Add autovacuum trigger instrumentation.\" ought\n> to be quickly mergeable.\n\nAny thoughts on v5-0001-*?\n\nIt would be nice to get the uncontentious part of all this (which is\nthe instrumentation patch) out of the way soon.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Jan 2023 18:37:59 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-21 18:37:59 -0800, Peter Geoghegan wrote:\n> On Tue, Jan 17, 2023 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think with a bit of polish \"Add autovacuum trigger instrumentation.\" ought\n> > to be quickly mergeable.\n> \n> Any thoughts on v5-0001-*?\n> \n> It would be nice to get the uncontentious part of all this (which is\n> the instrumentation patch) out of the way soon.\n\nIs https://www.postgresql.org/message-id/CAH2-WzmytCuSpaMEhv8H-jt8x_9whTi0T5bjNbH2gvaR0an2Pw%40mail.gmail.com\nthe last / relevant version of the patch to look at?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 21 Jan 2023 18:54:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 6:54 PM Andres Freund <andres@anarazel.de> wrote:\n> Is https://www.postgresql.org/message-id/CAH2-WzmytCuSpaMEhv8H-jt8x_9whTi0T5bjNbH2gvaR0an2Pw%40mail.gmail.com\n> the last / relevant version of the patch to look at?\n\nYes. I'm mostly just asking about v5-0001-* right now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 21 Jan 2023 19:19:58 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-18 17:54:27 -0800, Peter Geoghegan wrote:\n> From 0afaf310255a068d3c1ca9d2ce6f00118cbff890 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Fri, 25 Nov 2022 11:23:20 -0800\n> Subject: [PATCH v5 1/2] Add autovacuum trigger instrumentation.\n> \n> Add new instrumentation that lists a triggering condition in the server\n> log whenever an autovacuum is logged. This reports \"table age\" as the\n> triggering criteria when antiwraparound autovacuum runs (the XID age\n> trigger case and the MXID age trigger case are represented separately).\n> Other cases are reported as autovacuum trigger when the tuple insert\n> thresholds or the dead tuple thresholds were crossed.\n> \n> Author: Peter Geoghegan <pg@bowt.ie>\n> Reviewed-By: Andres Freund <andres@anarazel.de>\n> Reviewed-By: Jeff Davis <pgsql@j-davis.com>\n> Discussion: https://postgr.es/m/CAH2-Wz=S-R_2rO49Hm94Nuvhu9_twRGbTm6uwDRmRu-Sqn_t3w@mail.gmail.com\n> ---\n> src/include/commands/vacuum.h | 19 +++-\n> src/backend/access/heap/vacuumlazy.c | 5 ++\n> src/backend/commands/vacuum.c | 31 ++++++-\n> src/backend/postmaster/autovacuum.c | 124 ++++++++++++++++++---------\n> 4 files changed, 137 insertions(+), 42 deletions(-)\n> \n> diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h\n> index 689dbb770..13f70a1f6 100644\n> --- a/src/include/commands/vacuum.h\n> +++ b/src/include/commands/vacuum.h\n> @@ -191,6 +191,21 @@ typedef struct VacAttrStats\n> #define VACOPT_SKIP_DATABASE_STATS 0x100\t/* skip vac_update_datfrozenxid() */\n> #define VACOPT_ONLY_DATABASE_STATS 0x200\t/* only vac_update_datfrozenxid() */\n> \n> +/*\n> + * Values used by autovacuum.c to tell vacuumlazy.c about the specific\n> + * threshold type that triggered an autovacuum worker.\n> + *\n> + * AUTOVACUUM_NONE is used when VACUUM isn't running in an autovacuum worker.\n> + */\n> +typedef enum AutoVacType\n> +{\n> +\tAUTOVACUUM_NONE = 0,\n> +\tAUTOVACUUM_TABLE_XID_AGE,\n> 
+\tAUTOVACUUM_TABLE_MXID_AGE,\n> +\tAUTOVACUUM_DEAD_TUPLES,\n> +\tAUTOVACUUM_INSERTED_TUPLES,\n> +} AutoVacType;\n\nWhy is there TABLE_ in AUTOVACUUM_TABLE_XID_AGE but not\nAUTOVACUUM_DEAD_TUPLES? Both are on tables.\n\n\nWhat do you think about naming this VacuumTriggerType and adding an\nVAC_TRIG_MANUAL or such?\n\n\n> /*\n> * Values used by index_cleanup and truncate params.\n> *\n> @@ -222,7 +237,8 @@ typedef struct VacuumParams\n> \t\t\t\t\t\t\t\t\t\t\t * use default */\n> \tint\t\t\tmultixact_freeze_table_age; /* multixact age at which to scan\n> \t\t\t\t\t\t\t\t\t\t\t * whole table */\n> -\tbool\t\tis_wraparound;\t/* force a for-wraparound vacuum */\n> +\tbool\t\tis_wraparound;\t/* antiwraparound autovacuum? */\n> +\tAutoVacType trigger;\t\t/* autovacuum trigger condition, if any */\n\nThe comment change for is_wraparound seems a bit pointless, but whatever.\n\n\n> @@ -2978,7 +2995,10 @@ relation_needs_vacanalyze(Oid relid,\n> \t\t\t\t\t\t bool *doanalyze,\n> \t\t\t\t\t\t bool *wraparound)\n> {\n\nThe changes here are still bigger than I'd like, but ...\n\n\n> -\tbool\t\tforce_vacuum;\n> +\tTransactionId relfrozenxid = classForm->relfrozenxid;\n> +\tMultiXactId relminmxid = classForm->relminmxid;\n> +\tAutoVacType trigger = AUTOVACUUM_NONE;\n> +\tbool\t\ttableagevac;\n\nHere + below we end up with three booleans that just represent the choices in\nour fancy new enum. 
That seems awkward to me.\n\n\n\n> @@ -3169,14 +3212,15 @@ autovacuum_do_vac_analyze(autovac_table *tab, BufferAccessStrategy bstrategy)\n> static void\n> autovac_report_activity(autovac_table *tab)\n> {\n> -#define MAX_AUTOVAC_ACTIV_LEN (NAMEDATALEN * 2 + 56)\n> +#define MAX_AUTOVAC_ACTIV_LEN (NAMEDATALEN * 2 + 100)\n> \tchar\t\tactivity[MAX_AUTOVAC_ACTIV_LEN];\n> \tint\t\t\tlen;\n> \n> \t/* Report the command and possible options */\n> \tif (tab->at_params.options & VACOPT_VACUUM)\n> \t\tsnprintf(activity, MAX_AUTOVAC_ACTIV_LEN,\n> -\t\t\t\t \"autovacuum: VACUUM%s\",\n> +\t\t\t\t \"autovacuum for %s: VACUUM%s\",\n> +\t\t\t\t vac_autovacuum_trigger_msg(tab->at_params.trigger),\n> \t\t\t\t tab->at_params.options & VACOPT_ANALYZE ? \" ANALYZE\" : \"\");\n> \telse\n> \t\tsnprintf(activity, MAX_AUTOVAC_ACTIV_LEN,\n\nSomehow the added \"for ...\" sounds a bit awkward. \"autovacuum for table XID\nage\". Maybe \"autovacuum due to ...\"?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 23 Jan 2023 18:56:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 6:56 PM Andres Freund <andres@anarazel.de> wrote:\n> Why is there TABLE_ in AUTOVACUUM_TABLE_XID_AGE but not\n> AUTOVACUUM_DEAD_TUPLES? Both are on tables.\n\nWhy does vacuum_freeze_table_age contain the word \"table\", while\nautovacuum_vacuum_scale_factor does not?\n\nTo me, \"table XID age\" is a less awkward term for \"relfrozenxid\nadvancing\", useful in contexts where it's probably more important to\nbe understood by non-experts than it is to be unambiguous. Besides,\nrelfrozenxid works at the level of the pg_class metadata. Nothing\nwhatsoever needs to have changed about the table itself, nor will\nanything necessarily be changed by VACUUM (except the relfrozenxid\nfield from pg_class).\n\n> What do you think about naming this VacuumTriggerType and adding an\n> VAC_TRIG_MANUAL or such?\n\nBut we're not doing anything with it in the context of manual VACUUMs.\nI'd prefer to keep this about what autovacuum.c thought needed to\nhappen, at least for as long as manual VACUUMs are something that\nautovacuum.c knows nothing about.\n\n> > - bool force_vacuum;\n> > + TransactionId relfrozenxid = classForm->relfrozenxid;\n> > + MultiXactId relminmxid = classForm->relminmxid;\n> > + AutoVacType trigger = AUTOVACUUM_NONE;\n> > + bool tableagevac;\n>\n> Here + below we end up with three booleans that just represent the choices in\n> our fancy new enum. That seems awkward to me.\n\nI don't follow. It's true that \"wraparound\" is still a synonym of\n\"tableagevac\" in 0001, but that changes in 0002. 
And even if you\nassume that 0002 won't get in, I think that it still makes sense to\nstructure it in a way that shows that in principle the \"wraparound\"\nbehaviors don't necessarily have to be used whenever \"tableagevac\" is\nin use.\n\n> > @@ -3169,14 +3212,15 @@ autovacuum_do_vac_analyze(autovac_table *tab, BufferAccessStrategy bstrategy)\n> > static void\n> > autovac_report_activity(autovac_table *tab)\n\n> Somehow the added \"for ...\" sounds a bit awkward. \"autovacuum for table XID\n> age\". Maybe \"autovacuum due to ...\"?\n\nThat works just as well IMV. I'll change it to that.\n\nAnything else for 0001? Would be nice to get it committed tomorrow.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 23 Jan 2023 19:22:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 4:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It sounds like they used DROP TRIGGER pretty regularly. So I think this\n> > sounds like exactly the kind of case I was talking about, where\n> > autovacuums keep getting cancelled until we decide to stop cancelling\n> > them.\n>\n> I don't know how you can reach that conclusion.\n\nI can accept that there might be some way I'm wrong about this in\ntheory, but your email then seems to go on to say that I'm right just\na few sentences later:\n\n> The whole article was about how this DROP TRIGGER pattern worked just\n> fine most of the time, because most of the time autovacuum was just\n> autocancelled. They say this at one point:\n>\n> \"The normal autovacuum mechanism is skipped when locks are held in\n> order to minimize service disruption. However, because transaction\n> wraparound is such a severe problem, if the system gets too close to\n> wraparound, an autovacuum is launched that does not back off under\n> lock contention.\"\n\nIf this isn't arguing in favor of exactly what I'm saying, I don't\nknow what that would look like.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:21:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 11:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The whole article was about how this DROP TRIGGER pattern worked just\n> > fine most of the time, because most of the time autovacuum was just\n> > autocancelled. They say this at one point:\n> >\n> > \"The normal autovacuum mechanism is skipped when locks are held in\n> > order to minimize service disruption. However, because transaction\n> > wraparound is such a severe problem, if the system gets too close to\n> > wraparound, an autovacuum is launched that does not back off under\n> > lock contention.\"\n>\n> If this isn't arguing in favor of exactly what I'm saying, I don't\n> know what that would look like.\n\nI'm happy to clear that up. What you said was:\n\n\"So I think this sounds like exactly the kind of case I was talking about, where\nautovacuums keep getting cancelled until we decide to stop cancelling them.\nIf so, then they were going to have a problem whenever that happened.\"\n\nJust because *some* autovacuums get cancelled doesn't mean they *all*\nget cancelled. And, even if the rate is quite high, that may not be\nmuch of a problem in itself (especially now that we have the freeze\nmap). 200 million XIDs usually amounts to a lot of wall clock time.\nEven if it is rather difficult to finish up, we only have to get lucky\nonce.\n\nThe fact that autovacuum eventually got to the point of requiring an\nantiwraparound autovacuum on the problematic table does indeed\nstrongly suggest that any other, earlier autovacuums were relatively\nunlikely to have advanced relfrozenxid in the end -- or at least\ncouldn't on this one occasion. But that in itself is just not relevant\nto our current discussion, since even the tiniest perturbation would\nhave been enough to prevent a non-aggressive VACUUM from being able to\nadvance relfrozenxid. 
Before 15, non-aggressive VACUUMs would throw\naway the opportunity to do so just because they couldn't immediately\nget a cleanup lock on one single heap page.\n\nIt's quite possible that most or all prior aggressive VACUUMs were not\nantiwraparound autovacuums, because the dead tuples accounting was\nenough to launch an autovacuum at some point after age(relfrozenxid)\nexceeded vacuum_freeze_table_age that was still before it could reach\nautovacuum_freeze_max_age. That would give you a cancellable\naggressive VACUUM -- a VACUUM that actually has a non-zero chance of\nadvancing relfrozenxid.\n\nSure, it's possible that such a cancellable aggressive autovacuum was\nindeed cancelled, and that that factor made the crucial difference.\nBut I find it far easier to believe that there simply was no such\naggressive autovacuum in the first place (not this time), since it\ncould have only happened when autovacuum thinks that there are\nsufficiently many dead tuples to justify launching an autovacuum in\nthe first place. Which, as we now all accept, is based on highly\ndubious sampling by ANALYZE. So I think it's much more likely to be\nthat factor (dead tuple accounting is bad), as well as the absurd\nfalse dichotomy between aggressive and non-aggressive -- plus the\nissue at hand, the auto-cancellation behavior.\n\nI don't claim to know what is inevitable, or what is guaranteed to\nwork or not work. I only claim that we can meaningfully reduce the\nabsolute risk by using a fairly simple approach, principally by not\nneedlessly coupling the auto-cancellation behavior to *all*\nautovacuums that are specifically triggered by age(relfrozenxid). As\nAndres said at one point, doing those two things at exactly the same\ntime is just arbitrary.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 24 Jan 2023 12:32:56 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 19:22:18 -0800, Peter Geoghegan wrote:\n> On Mon, Jan 23, 2023 at 6:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > Why is there TABLE_ in AUTOVACUUM_TABLE_XID_AGE but not\n> > AUTOVACUUM_DEAD_TUPLES? Both are on tables.\n> \n> Why does vacuum_freeze_table_age contain the word \"table\", while\n> autovacuum_vacuum_scale_factor does not?\n\nI don't know. But that's not really a reason to introduce more oddities.\n\n\n> To me, \"table XID age\" is a less awkward term for \"relfrozenxid\n> advancing\", useful in contexts where it's probably more important to\n> be understood by non-experts than it is to be unambiguous. Besides,\n> relfrozenxid works at the level of the pg_class metadata. Nothing\n> whatsoever needs to have changed about the table itself, nor will\n> anything necessarily be changed by VACUUM (except the relfrozenxid\n> field from pg_class).\n\nI'd just go for \"xid age\", I don't see a point in adding 'table', particularly\nwhen you don't for dead tuples.\n\n\n> > What do you think about naming this VacuumTriggerType and adding an\n> > VAC_TRIG_MANUAL or such?\n> \n> But we're not doing anything with it in the context of manual VACUUMs.\n\nIt's a member of a struct passed to the routines handling both manual and\ninteractive vacuum. And we could e.g. eventually start replace\nIsAutoVacuumWorkerProcess() checks with it - which aren't e.g. 
going to work\nwell if we add parallel index vacuuming support to autovacuum.\n\n\n\n> I'd prefer to keep this about what autovacuum.c thought needed to\n> happen, at least for as long as manual VACUUMs are something that\n> autovacuum.c knows nothing about.\n\nIt's an enum defined in a general header, not something in autovacuum.c - so I\ndon't really buy this.\n\n\n> > > - bool force_vacuum;\n> > > + TransactionId relfrozenxid = classForm->relfrozenxid;\n> > > + MultiXactId relminmxid = classForm->relminmxid;\n> > > + AutoVacType trigger = AUTOVACUUM_NONE;\n> > > + bool tableagevac;\n> >\n> > Here + below we end up with three booleans that just represent the choices in\n> > our fancy new enum. That seems awkward to me.\n> \n> I don't follow. It's true that \"wraparound\" is still a synonym of\n> \"tableagevac\" in 0001, but that changes in 0002. And even if you\n> assume that 0002 won't get in, I think that it still makes sense to\n> structure it in a way that shows that in principle the \"wraparound\"\n> behaviors don't necessarily have to be used whenever \"tableagevac\" is\n> in use.\n\nYou have booleans tableagevac, deadtupvac, inserttupvac. Particularly the\nlatter ones really are just a rephrasing of the trigger:\n\n+\ttableagevac = true;\n+\t*wraparound = false;\n+\t/* See header comments about trigger precedence */\n+\tif (TransactionIdIsNormal(relfrozenxid) &&\n+\t\tTransactionIdPrecedes(relfrozenxid, xidForceLimit))\n+\t\ttrigger = AUTOVACUUM_TABLE_XID_AGE;\n+\telse if (MultiXactIdIsValid(relminmxid) &&\n+\t\t\t MultiXactIdPrecedes(relminmxid, multiForceLimit))\n+\t\ttrigger = AUTOVACUUM_TABLE_MXID_AGE;\n+\telse\n+\t\ttableagevac = false;\n+\n+\t/* User disabled non-table-age autovacuums in pg_class.reloptions? 
*/\n+\tif (!av_enabled && !tableagevac)\n\n...\n\n+\t\tdeadtupvac = (vactuples > vacthresh);\n+\t\tinserttupvac = (vac_ins_base_thresh >= 0 && instuples > vacinsthresh);\n+\t\t/* See header comments about trigger precedence */\n+\t\tif (!tableagevac)\n+\t\t{\n+\t\t\tif (deadtupvac)\n+\t\t\t\ttrigger = AUTOVACUUM_DEAD_TUPLES;\n+\t\t\telse if (inserttupvac)\n+\t\t\t\ttrigger = AUTOVACUUM_INSERTED_TUPLES;\n+\t\t}\n+\n \t\t/* Determine if this table needs vacuum or analyze. */\n-\t\t*dovacuum = force_vacuum || (vactuples > vacthresh) ||\n-\t\t\t(vac_ins_base_thresh >= 0 && instuples > vacinsthresh);\n+\t\t*dovacuum = (tableagevac || deadtupvac || inserttupvac);\n\n\nI find this to be awkward code. The booleans are kinda pointless, and the\ntableagevac case is hard to follow because trigger is set elsewhere.\n\nI can give reformulating it a go. Need to make some food first.\n\n\nI suspect that the code would look better if we didn't continue to have\n\"bool *dovacuum\" and the trigger. They're redundant.\n\n\n> Anything else for 0001? Would be nice to get it committed tomorrow.\n\nSorry, today was busy with meetings and bashing my head against AIX.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 Jan 2023 20:59:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-24 20:59:04 -0800, Andres Freund wrote:\n> I find this to be awkward code. The booleans are kinda pointless, and the\n> tableagevac case is hard to follow because trigger is set elsewhere.\n>\n> I can give reformulating it a go. Need to make some food first.\n\nHere's a draft of what I am thinking of. Not perfect yet, but I think it looks\nbetter.\n\nThe pg_stat_activity output looks like this right now:\n\nautovacuum due to table XID age: VACUUM public.pgbench_accounts (to prevent wraparound)\n\nWhy don't we drop the \"(to prevent wraparound)\" now?\n\nAnd I still think removing the \"table \" bit would be an improvement.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 25 Jan 2023 01:19:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 3:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Sure, it's possible that such a cancellable aggressive autovacuum was\n> indeed cancelled, and that that factor made the crucial difference.\n> But I find it far easier to believe that there simply was no such\n> aggressive autovacuum in the first place (not this time), since it\n> could have only happened when autovacuum thinks that there are\n> sufficiently many dead tuples to justify launching an autovacuum in\n> the first place. Which, as we now all accept, is based on highly\n> dubious sampling by ANALYZE. So I think it's much more likely to be\n> that factor (dead tuple accounting is bad), as well as the absurd\n> false dichotomy between aggressive and non-aggressive -- plus the\n> issue at hand, the auto-cancellation behavior.\n\nIn my opinion, this is too speculative to justify making changes to\nthe behavior. I'm not here to defend the way we do dead tuple\naccounting. I think it's a hot mess. But whether or not it played any\nrole in this catastrophe is hard to say. The problems with dead tuple\naccounting are, as I understand it, all about statistical\nindependence. That is, we make assumptions that what's true of the\nsample is likely to be true of the whole table when in reality it may\nnot be true at all. Perhaps it's even unlikely to be true. But the\nkinds of problems you get from assuming statistical independence tend\nto hit users very unevenly. We make similar assumptions about\nselectivity estimation: unless there are extended statistics, we take\nP(a=1 and b=1) = P(a=1)*P(b = 1), which can be vastly and dramatically\nwrong. People can and do get extremely bad query plans as a result of\nthat assumption. However, other people run PostgreSQL for years and\nyears and never really have big problems with it. 
I'd say that it's\ncompletely fair to describe this as a big problem, but we can't\ntherefore conclude that some particular user has this problem, not\neven if we know that they have a slow query and we know that it's due\nto a bad plan. And similarly here, I don't see a particular reason to\nthink that your theory about what happened is more likely than mine. I\nfreely admit that yours could be right, just as you admitted that mine\ncould be right. But I think we really just don't know.\n\nIt feels unlikely to me that there was ONE cancellable aggressive\nautovacuum and that it got cancelled. I think that it's probably\neither ZERO or LOTS, depending on whether the dead tuple threshold was\never reached. If it wasn't, then it must have been zero. But if it\nwas, and the first autovacuum worker to visit that table got\ncancelled, then the next one would try again. And\nautovacuum_naptime=1m, so if we're not out of autovacuum workers,\nwe're going to retry that table every minute. If we do run out of\nautovacuum workers, which is pretty likely, we'll still launch new\nworkers in that database as often as we can given when other workers\nexit. If the system is very tight on autovacuum capacity, probably\nbecause the cost limit is too low, then you could have a situation\nwhere only one try gets made before we hit autovacuum_freeze_max_age.\nOtherwise, though, a single failed try would probably lead to trying a\nwhole lot more times after that, and you only hit\nautovacuum_freeze_max_age if all those attempts fail.\n\nAt the risk of repeating myself, here's what bugs me. If we suppose\nthat your intuition is right and no aggressive autovacuum happened\nbefore autovacuum_freeze_max_age was reached, then what you are\nproposing will make things better. 
But if we suppose that my intuition\nis right and many aggressive autovacuums happened before\nautovacuum_freeze_max_age was reached, then what you are proposing\nwill make things worse, because if we've been auto-cancelling\nrepeatedly we're probably going to keep doing so until we shut that\nbehavior off, and we want a vacuum to succeed sooner rather than\nlater. So it doesn't feel obvious to me that we should change\nanything. Even if we knew what had happened for certain in this\nparticular case, I don't know how we can possibly know what is typical\nin similar cases.\n\nMy personal support experience has been that cases where autovacuum\nruns a lot but doesn't solve the problem for some reason are a lot\nmore common than cases where it doesn't run when it should have done.\nThat probably accounts for my intuition about what is likely to have\nhappened here. But as my colleagues are often at pains to point out to\nme, my experiences aren't representative of what happens to PostgreSQL\nusers generally for all kinds of reasons, and therefore sometimes my\nintuition is wrong. But since I have nobody else's experiences to use\nin lieu of my own, I don't know what else I can use to judge anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Jan 2023 10:28:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 1:19 AM Andres Freund <andres@anarazel.de> wrote:\n> Here's a draft of what I am thinking of. Not perfect yet, but I think it looks\n> better.\n\nI'm afraid that I will be unable to do any more work on this project.\nI have withdrawn it from the CF app.\n\nIf you would like to complete some or all of the patches yourself, in\npart or in full, I have no objections.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 22:41:34 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Decoupling antiwraparound autovacuum from special rules around\n auto cancellation"
}
] |
[
{
"msg_contents": "Hi,\n\nFound documents about parallel scan may be not so accurate.\n\nAs said in parallel.smgl:\n\n```\nIn a parallel sequential scan, the table's blocks will be divided among the cooperating processes. Blocks are handed out one at a time, so that access to the table remains sequential.\n```\n\nTo my understanding, this was right before. Because we return one block if a worker ask for before commit 56788d2156.\nAs comments inside table_block_parallelscan_nextpage:\n```\nEarlier versions of this would allocate the next highest block number to the next worker to call this function.\n```\nAnd from commit 56788d2156, each parallel worker will try to get ranges of blocks “chunks\".\nAccess to the table remains sequential inside each worker’s process, but not across all workers or the parallel query.\nShall we update the documents?\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi,\n\nFound documents about parallel scan may be not so accurate.\n\nAs said in parallel.smgl:\n\n```\nIn a parallel sequential scan, the table's blocks will be divided among the cooperating processes. Blocks are handed out one at a time, so that access to the table remains sequential.\n```\n\nTo my understanding, this was right before. Because we return one block if a worker ask for before commit 56788d2156.\nAs comments inside table_block_parallelscan_nextpage:\n```\nEarlier versions of this would allocate the next highest block number to the next worker to call this function.\n```\nAnd from commit 56788d2156, each parallel worker will try to get ranges of blocks “chunks\".\nAccess to the table remains sequential inside each worker’s process, but not across all workers or the parallel query.\nShall we update the documents?\n\n\nRegards,\nZhang Mingli",
"msg_date": "Thu, 20 Oct 2022 11:02:52 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Documentation refinement for Parallel Scans"
},
{
"msg_contents": "On Thu, 20 Oct 2022 at 16:03, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> As said in parallel.smgl:\n>\n> In a parallel sequential scan, the table's blocks will be divided among the cooperating processes. Blocks are handed out one at a time, so that access to the table remains sequential.\n\n> Shall we update the documents?\n\nYeah, 56788d215 should have updated that. Seems I didn't expect that\nlevel of detail in the docs. I've attached a patch to address this.\n\nI didn't feel the need to go into too much detail about how the sizes\nof the ranges are calculated. I tried to be brief, but I think I did\nleave enough in there so that a reader will know that we don't just\nmake the range length <nblocks> / <nworkers>.\n\nI'll push this soon if nobody has any other wording suggestions.\n\nThanks for the report.\n\nDavid",
"msg_date": "Thu, 20 Oct 2022 19:33:34 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation refinement for Parallel Scans"
},
{
"msg_contents": "On Thu, 20 Oct 2022 at 19:33, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll push this soon if nobody has any other wording suggestions.\n\nPushed.\n\nThanks for the report.\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Oct 2022 09:31:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation refinement for Parallel Scans"
}
] |
[
{
"msg_contents": "Having a sup_user and a normal_user, login with sup_user\nselect session_user, current_user\nsup_user, sup_user\n\nset role normal_user;\nselect session_user, current_user\nsup_user, normal_user\n\nBut then, when sup_user was running with normal_user grants an exception\noccurs\nselect * from Some_Schema.Some_Table;\n\nI was running with SET ROLE NORMAL_USER but I cannot see that info on LOG\n\nuser_name;error_severity;message\nsup_user;ERROR;permission denied for schema Some_Schema\n\nWould be good to have on LOG session_user / current_user if they differ,\nwhat do you think ?\nWhich one is better\n- Put session_user / current_user on same %u prefix and fill current_user\nonly if it differs from session_user ?\n- Create another prefix for it, %o for example\nthanks,\nMarcos\n\nHaving a sup_user and a normal_user, login with sup_userselect session_user, current_user sup_user, sup_userset role normal_user;select session_user, current_user sup_user, normal_userBut then, when sup_user was running with normal_user grants an exception occursselect * from Some_Schema.Some_Table;I was running with SET ROLE NORMAL_USER but I cannot see that info on LOGuser_name;error_severity;messagesup_user;ERROR;permission denied for schema Some_SchemaWould be good to have on LOG session_user / current_user if they differ, what do you think ?Which one is better- Put session_user / current_user on same %u prefix and fill current_user only if it differs from session_user ?- Create another prefix for it, %o for examplethanks,Marcos",
"msg_date": "Thu, 20 Oct 2022 08:35:21 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?=E2=80=8Bsession=5Fuser_and_current=5Fuser_on_LOG?="
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that\n select date_part('millennium', now()); --> 3\n\nwill execute also, unperturbed, in this form:\n select date_part('millennium xxxxx', now()); --> 3\n\nBy the same token\n\n select extract(millennium from now()) --> 3\n select extract(millenniumxxxxxxxxx from now()) --> 3\n\nThis laxness occurs in all releases, and with 'millennium', \n'millisecond', and 'microsecond' (at least).\n\nEven though it's not likely to cause much real-life headaches, and I \nhesitate to call it a real bug, perhaps it would be better if it could \nbe a bit stricter.\n\nThanks,\n\nErik Rijkers\n\n\n",
"msg_date": "Thu, 20 Oct 2022 14:45:50 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "date_part/extract parse curiosity"
},
{
"msg_contents": "\nOn Thu, 20 Oct 2022 at 20:45, Erik Rijkers <er@xs4all.nl> wrote:\n> Hi,\n>\n> I noticed that\n> select date_part('millennium', now()); --> 3\n>\n> will execute also, unperturbed, in this form:\n> select date_part('millennium xxxxx', now()); --> 3\n>\n> By the same token\n>\n> select extract(millennium from now()) --> 3\n> select extract(millenniumxxxxxxxxx from now()) --> 3\n>\n> This laxness occurs in all releases, and with 'millennium',\n> 'millisecond', and 'microsecond' (at least).\n>\n> Even though it's not likely to cause much real-life headaches, and I\n> hesitate to call it a real bug, perhaps it would be better if it could\n> be a bit stricter.\n>\n\nAccording to the documentation [1], the extract() only has some field names,\nhowever, the code use strncmp() to compare the units and tokens.\n\n int\n DecodeUnits(int field, char *lowtoken, int *val)\n {\n int type;\n const datetkn *tp;\n \n tp = deltacache[field];\n /* use strncmp so that we match truncated tokens */ <---- here\n if (tp == NULL || strncmp(lowtoken, tp->token, TOKMAXLEN) != 0)\n {\n tp = datebsearch(lowtoken, deltatktbl, szdeltatktbl);\n }\n if (tp == NULL)\n {\n type = UNKNOWN_FIELD;\n *val = 0;\n }\n else\n {\n deltacache[field] = tp;\n type = tp->type;\n *val = tp->value;\n }\n \n return type;\n }\n\nThis is convenient for field names such as millennium and millenniums,\nhowever it also valid for millenniumxxxxxxxxxxxx, which is looks strange.\n\nMaybe we should document this. I'd be inclined to change the code to\nmatch the certain valid field names.\n\nAny thoughts?\n\n[1] https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 20 Oct 2022 21:57:34 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: date_part/extract parse curiosity"
},
{
"msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> On Thu, 20 Oct 2022 at 20:45, Erik Rijkers <er@xs4all.nl> wrote:\n>> I noticed that\n>> select date_part('millennium', now()); --> 3\n>> \n>> will execute also, unperturbed, in this form:\n>> select date_part('millennium xxxxx', now()); --> 3\n\n> Maybe we should document this. I'd be inclined to change the code to\n> match the certain valid field names.\n\nI think changing this behavior has a significant chance of drawing\ncomplaints and zero chance of making anyone happier.\n\nThe current state of affairs (including the lack of unnecessary\ndocumentation detail) is likely quite intentional.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Oct 2022 10:12:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: date_part/extract parse curiosity"
},
{
"msg_contents": "\nOn Thu, 20 Oct 2022 at 22:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> On Thu, 20 Oct 2022 at 20:45, Erik Rijkers <er@xs4all.nl> wrote:\n>>> I noticed that\n>>> select date_part('millennium', now()); --> 3\n>>> \n>>> will execute also, unperturbed, in this form:\n>>> select date_part('millennium xxxxx', now()); --> 3\n>\n>> Maybe we should document this. I'd be inclined to change the code to\n>> match the certain valid field names.\n>\n> I think changing this behavior has a significant chance of drawing\n> complaints and zero chance of making anyone happier.\n>\n\nMaybe.\n\n> The current state of affairs (including the lack of unnecessary\n> documentation detail) is likely quite intentional.\n>\n\nI'm curious about why not document this?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Thu, 20 Oct 2022 22:43:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: date_part/extract parse curiosity"
}
] |
[
{
"msg_contents": "Suppose that, for some reason, you want to use pg_basebackup on a\nLinux machine to back up a database cluster on a Windows machine.\nSuppose further that you attempt to use the -T option. Then you might\nrun afoul of this check:\n\n /*\n * This check isn't absolutely necessary. But all tablespaces are created\n * with absolute directories, so specifying a non-absolute path here would\n * just never match, possibly confusing users. It's also good to be\n * consistent with the new_dir check.\n */\n if (!is_absolute_path(cell->old_dir))\n pg_fatal(\"old directory is not an absolute path in tablespace\nmapping: %s\",\n cell->old_dir);\n\nThe problem is that the definition of is_absolute_path() here differs\ndepending on whether you are on Windows or not. So this code is, I\nthink, subtly incorrect. What it is testing is whether the\nuser-specified pathname is an absolute pathname *on the local machine*\nwhereas what it should be testing is whether the user-specified\npathname is an absolute pathname *on the remote machine*. There's no\nproblem if both sides are Windows or neither side is Windows, but if\nthe remote side is and the local side isn't, then something like\n-TC:\\foo=/backup/foo will fail. As far as I know, there's no reason\nwhy that shouldn't be permitted to work.\n\nWhat this check is actually intending to prevent, I believe, is\nsomething like -T../mytablespace=/bkp/ts1, because that wouldn't\nactually work: the value in the list will be an absolute path. The\ntablespace wouldn't get remapped, and the user might be confused about\nwhy it didn't, so it is good that we tell them what they did wrong.\nHowever, I think we could relax the check a little bit, something\nalong the lines of !is_nonwindows_absolute_path(cell->old_dir) &&\n!is_windows_absolute_path(dir). 
We can't actually know whether the\nremote side is Windows or non-Windows, but if the string we're given\nis plausibly an absolute path under either set of conventions, it's\nprobably fine to just search the list for it and see if it shows up.\n\nThis would have the disadvantage that if a Linux user creates a\ntablespace directory inside $PGDATA and gives it a name like\n/home/rhaas/pgdata/C:\\Program Files\\PostgreSQL\\Data, and then attempts\na backup with '-TC:\\Program Files\\PostgreSQL\\Data=/tmp/ts1' it will\nnot relocate the tablespace, yet the user won't get a message\nexplaining why. I'm prepared to dismiss that scenario as \"not a real\nuse case\".\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Oct 2022 11:11:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "cross-platform pg_basebackup"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> However, I think we could relax the check a little bit, something\n> along the lines of !is_nonwindows_absolute_path(cell->old_dir) &&\n> !is_windows_absolute_path(dir). We can't actually know whether the\n> remote side is Windows or non-Windows, but if the string we're given\n> is plausibly an absolute path under either set of conventions, it's\n> probably fine to just search the list for it and see if it shows up.\n\nSeems reasonable.\n\n> This would have the disadvantage that if a Linux user creates a\n> tablespace directory inside $PGDATA and gives it a name like\n> /home/rhaas/pgdata/C:\\Program Files\\PostgreSQL\\Data, and then attempts\n> a backup with '-TC:\\Program Files\\PostgreSQL\\Data=/tmp/ts1' it will\n> not relocate the tablespace, yet the user won't get a message\n> explaining why. I'm prepared to dismiss that scenario as \"not a real\n> use case\".\n\nAgreed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Oct 2022 12:17:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > However, I think we could relax the check a little bit, something\n> > along the lines of !is_nonwindows_absolute_path(cell->old_dir) &&\n> > !is_windows_absolute_path(dir). We can't actually know whether the\n> > remote side is Windows or non-Windows, but if the string we're given\n> > is plausibly an absolute path under either set of conventions, it's\n> > probably fine to just search the list for it and see if it shows up.\n>\n> Seems reasonable.\n\nCool. Here's a patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 20 Oct 2022 13:04:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Cool. Here's a patch.\n\nLGTM, except I'd be inclined to ensure that all the macros\nare function-style, ie\n\n+#define IS_DIR_SEP(ch) IS_NONWINDOWS_DIR_SEP(ch)\n\nnot just\n\n+#define IS_DIR_SEP IS_NONWINDOWS_DIR_SEP\n\nI don't recall the exact rules, but I know that the second style\ncan lead to expanding the macro in more cases, which we likely\ndon't want. It also seems like better documentation to show\nthe expected arguments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Oct 2022 13:28:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 1:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Cool. Here's a patch.\n>\n> LGTM, except I'd be inclined to ensure that all the macros\n> are function-style, ie\n>\n> +#define IS_DIR_SEP(ch) IS_NONWINDOWS_DIR_SEP(ch)\n>\n> not just\n>\n> +#define IS_DIR_SEP IS_NONWINDOWS_DIR_SEP\n>\n> I don't recall the exact rules, but I know that the second style\n> can lead to expanding the macro in more cases, which we likely\n> don't want. It also seems like better documentation to show\n> the expected arguments.\n\nOK, thanks. v2 attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 20 Oct 2022 14:47:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "\nOn 2022-10-20 Th 14:47, Robert Haas wrote:\n> On Thu, Oct 20, 2022 at 1:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> Cool. Here's a patch.\n>> LGTM, except I'd be inclined to ensure that all the macros\n>> are function-style, ie\n>>\n>> +#define IS_DIR_SEP(ch) IS_NONWINDOWS_DIR_SEP(ch)\n>>\n>> not just\n>>\n>> +#define IS_DIR_SEP IS_NONWINDOWS_DIR_SEP\n>>\n>> I don't recall the exact rules, but I know that the second style\n>> can lead to expanding the macro in more cases, which we likely\n>> don't want. It also seems like better documentation to show\n>> the expected arguments.\n> OK, thanks. v2 attached.\n>\n\n\nLooks good.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 20 Oct 2022 15:12:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "Hi,\nPatch v2 looks good to me, I have tested it, and pg_basebackup works fine\nacross the platforms (Windows to Linux and Linux to Windows).\nSyntax used for testing\n$ pg_basebackup -h remote_server_ip -p 5432 -U user_name -D backup/data -T\nolddir=newdir\n\nI have also tested with non-absolute paths, it behaves as expected.\n\nOn Fri, Oct 21, 2022 at 12:42 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-10-20 Th 14:47, Robert Haas wrote:\n> > On Thu, Oct 20, 2022 at 1:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> Cool. Here's a patch.\n> >> LGTM, except I'd be inclined to ensure that all the macros\n> >> are function-style, ie\n> >>\n> >> +#define IS_DIR_SEP(ch) IS_NONWINDOWS_DIR_SEP(ch)\n> >>\n> >> not just\n> >>\n> >> +#define IS_DIR_SEP IS_NONWINDOWS_DIR_SEP\n> >>\n> >> I don't recall the exact rules, but I know that the second style\n> >> can lead to expanding the macro in more cases, which we likely\n> >> don't want. It also seems like better documentation to show\n> >> the expected arguments.\n> > OK, thanks. v2 attached.\n> >\n>\n>\n> Looks good.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n>\n\n-- \nRegards,\nDavinder\nEnterpriseDB: http://www.enterprisedb.com\n\nHi,Patch v2 looks good to me, I have tested it, and pg_basebackup works fine across the platforms (Windows to Linux and Linux to Windows).Syntax used for testing$ pg_basebackup -h remote_server_ip -p 5432 -U user_name -D backup/data -T olddir=newdir I have also tested with non-absolute paths, it behaves as expected.On Fri, Oct 21, 2022 at 12:42 AM Andrew Dunstan <andrew@dunslane.net> wrote:\nOn 2022-10-20 Th 14:47, Robert Haas wrote:\n> On Thu, Oct 20, 2022 at 1:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> Cool. Here's a patch.\n>> LGTM, except I'd be inclined to ensure that all the macros\n>> are function-style, ie\n>>\n>> +#define IS_DIR_SEP(ch) IS_NONWINDOWS_DIR_SEP(ch)\n>>\n>> not just\n>>\n>> +#define IS_DIR_SEP IS_NONWINDOWS_DIR_SEP\n>>\n>> I don't recall the exact rules, but I know that the second style\n>> can lead to expanding the macro in more cases, which we likely\n>> don't want. It also seems like better documentation to show\n>> the expected arguments.\n> OK, thanks. v2 attached.\n>\n\n\nLooks good.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n-- Regards,DavinderEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Oct 2022 13:44:33 +0530",
"msg_from": "davinder singh <davindersingh2692@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 4:14 AM davinder singh\n<davindersingh2692@gmail.com> wrote:\n> Hi,\n> Patch v2 looks good to me, I have tested it, and pg_basebackup works fine across the platforms (Windows to Linux and Linux to Windows).\n> Syntax used for testing\n> $ pg_basebackup -h remote_server_ip -p 5432 -U user_name -D backup/data -T olddir=newdir\n>\n> I have also tested with non-absolute paths, it behaves as expected.\n\nCool. Thanks to you, Andrew, and Tom for reviewing.\n\nCommitted and back-patched to all supported branches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Oct 2022 09:15:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 21, 2022 at 09:15:39AM -0400, Robert Haas wrote:\n>\n> Committed and back-patched to all supported branches.\n\nIs there any additional things to be taken care of or should\nhttps://commitfest.postgresql.org/40/3954/ be closed?\n\n\n",
"msg_date": "Mon, 24 Oct 2022 10:44:26 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cross-platform pg_basebackup"
},
{
"msg_contents": "On Sun, Oct 23, 2022 at 10:44 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, Oct 21, 2022 at 09:15:39AM -0400, Robert Haas wrote:\n> > Committed and back-patched to all supported branches.\n>\n> Is there any additional things to be taken care of or should\n> https://commitfest.postgresql.org/40/3954/ be closed?\n\nAs far as I know we're done. I have closed that entry.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Oct 2022 12:07:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cross-platform pg_basebackup"
}
] |
[
{
"msg_contents": "While playing with a proposed patch, I noticed that a session crashes\nafter a failed call to pg_backup_start().\n\npostgres=# select pg_backup_start(repeat('x', 1026));\nERROR: backup label too long (max 1024 bytes)\npostgres=# \\q\n> TRAP: failed Assert(\"during_backup_start ^ (sessionBackupState == SESSION_BACKUP_RUNNING)\"), File: \"xlog.c\", Line: 8846, PID: 165835\n\nSurprisingly this also happens after a series of successful calls to\npg_backup_start and stop.\n\npostgres=# select pg_backup_start('x');\npostgres=# select pg_backup_stop();\npostgres=# \\q\n> TRAP: failed Assert(\"durin..\n\n\n>> do_pg_abort_backup(int code, Datum arg)\n> \t/* Only one of these conditions can be true */\n>\tAssert(during_backup_start ^\n>\t\t (sessionBackupState == SESSION_BACKUP_RUNNING));\n\nIt seems to me that the comment is true and the condition is a thinko.\nThis is introduced by df3737a651 so it is master only.\n\nPlease find the attached fix.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 21 Oct 2022 16:10:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 3:10 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> It seems to me that the comment is true and the condition is a thinko.\n\n\nYeah, the two conditions could be both false. How about we update the\ncomment a bit to emphasize this? Such as\n\n /* At most one of these conditions can be true */\nor\n /* These conditions can not be both true */\n\n\n> Please find the attached fix.\n\n\n+1\n\nThanks\nRichard\n\nOn Fri, Oct 21, 2022 at 3:10 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\nIt seems to me that the comment is true and the condition is a thinko. Yeah, the two conditions could be both false. How about we update thecomment a bit to emphasize this? Such as /* At most one of these conditions can be true */or /* These conditions can not be both true */ \nPlease find the attached fix. +1ThanksRichard",
"msg_date": "Fri, 21 Oct 2022 17:53:25 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 05:53:25PM +0800, Richard Guo wrote:\n> Yeah, the two conditions could be both false. How about we update the\n> comment a bit to emphasize this? Such as\n> \n> /* At most one of these conditions can be true */\n> or\n> /* These conditions can not be both true */\n\nIf you do that, it would be a bit easier to read as of the following\nassertion instead? Like:\nAssert(!during_backup_start ||\n sessionBackupState == SESSION_BACKUP_NONE);\n\nPlease note that I have not checked in details all the interactions\nbehind register_persistent_abort_backup_handler() before entering in\ndo_pg_backup_start() and the ERROR_CLEANUP block used in this\nroutine (just a matter of some elog(ERROR)s put here and there, for\nexample). Anyway, yes, both conditions can be false, and that's easy\nto reproduce:\n1) Do pg_backup_start().\n2) Do pg_backup_stop().\n3) Stop the session to kick do_pg_abort_backup()\n4) Assert()-boom.\n--\nMichael",
"msg_date": "Fri, 21 Oct 2022 22:35:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 7:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 21, 2022 at 05:53:25PM +0800, Richard Guo wrote:\n> > Yeah, the two conditions could be both false. How about we update the\n> > comment a bit to emphasize this? Such as\n> >\n> > /* At most one of these conditions can be true */\n> > or\n> > /* These conditions can not be both true */\n>\n> If you do that, it would be a bit easier to read as of the following\n> assertion instead? Like:\n> Assert(!during_backup_start ||\n> sessionBackupState == SESSION_BACKUP_NONE);\n>\n> Please note that I have not checked in details all the interactions\n> behind register_persistent_abort_backup_handler() before entering in\n> do_pg_backup_start() and the ERROR_CLEANUP block used in this\n> routine (just a matter of some elog(ERROR)s put here and there, for\n> example). Anyway, yes, both conditions can be false, and that's easy\n> to reproduce:\n> 1) Do pg_backup_start().\n> 2) Do pg_backup_stop().\n> 3) Stop the session to kick do_pg_abort_backup()\n> 4) Assert()-boom.\n\nI'm wondering if we need the assertion at all. We know that when the\narg is true or the sessionBackupState is SESSION_BACKUP_RUNNING, the\nrunningBackups would've been incremented and we can just go ahead and\ndecrement it, like the attached patch. This is a cleaner approach IMO\nunless I'm missing something here.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Oct 2022 20:37:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On 2022-Oct-21, Michael Paquier wrote:\n\n> On Fri, Oct 21, 2022 at 05:53:25PM +0800, Richard Guo wrote:\n\n> > /* These conditions can not be both true */\n> \n> If you do that, it would be a bit easier to read as of the following\n> assertion instead? Like:\n> Assert(!during_backup_start ||\n> sessionBackupState == SESSION_BACKUP_NONE);\n\nMy intention here was that the Assert should be inside the block, that\nis, we already know that at least one is true, and we want to make sure\nthat they are not *both* true.\n\nAFAICT the attached patch also fixes the bug without making the assert\nweaker.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Sat, 22 Oct 2022 09:56:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Sat, Oct 22, 2022 at 1:26 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-21, Michael Paquier wrote:\n>\n> > On Fri, Oct 21, 2022 at 05:53:25PM +0800, Richard Guo wrote:\n>\n> > > /* These conditions can not be both true */\n> >\n> > If you do that, it would be a bit easier to read as of the following\n> > assertion instead? Like:\n> > Assert(!during_backup_start ||\n> > sessionBackupState == SESSION_BACKUP_NONE);\n>\n> My intention here was that the Assert should be inside the block, that\n> is, we already know that at least one is true, and we want to make sure\n> that they are not *both* true.\n>\n> AFAICT the attached patch also fixes the bug without making the assert\n> weaker.\n\n+ /* We should be here only by one of these reasons, never both */\n+ Assert(during_backup_start ^\n+ (sessionBackupState == SESSION_BACKUP_RUNNING));\n+\n\nWhat's the problem even if we're here when both of them are true? The\nrunningBackups is incremented anyways, right? Why can't we just get\nrid of the Assert and treat during_backup_start as\nbackup_marked_active_in_shmem or something like that to keep things\nsimple?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 22 Oct 2022 13:35:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On 2022-Oct-22, Bharath Rupireddy wrote:\n\n> + /* We should be here only by one of these reasons, never both */\n> + Assert(during_backup_start ^\n> + (sessionBackupState == SESSION_BACKUP_RUNNING));\n> +\n> \n> What's the problem even if we're here when both of them are true?\n\nIn what case should we be there with both conditions true?\n\n> The runningBackups is incremented anyways, right?\n\nIn the current code, yes, but it seems to be easier to reason about if\nwe know precisely why we're there and whether we should be running the\ncleanup or not. Otherwise we might end up with a bug where we run the\nfunction but it doesn't do anything because we failed to understand the\npreconditions. At the very least, this forces a developer changing this\ncode to think through it.\n\n> Why can't we just get rid of the Assert and treat during_backup_start\n> as backup_marked_active_in_shmem or something like that to keep things\n> simple?\n\nWhy is that simpler?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n",
"msg_date": "Sat, 22 Oct 2022 10:26:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Sat, Oct 22, 2022 at 1:56 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > Why can't we just get rid of the Assert and treat during_backup_start\n> > as backup_marked_active_in_shmem or something like that to keep things\n> > simple?\n>\n> Why is that simpler?\n\nIMO, the assertion looks complex there, and I was wondering whether we can remove it.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 22 Oct 2022 14:08:13 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "At Sat, 22 Oct 2022 09:56:06 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Oct-21, Michael Paquier wrote:\n> \n> > On Fri, Oct 21, 2022 at 05:53:25PM +0800, Richard Guo wrote:\n> \n> > > /* These conditions can not be both true */\n> > \n> > If you do that, it would be a bit easier to read as of the following\n> > assertion instead? Like:\n> > Assert(!during_backup_start ||\n> > sessionBackupState == SESSION_BACKUP_NONE);\n> \n> My intention here was that the Assert should be inside the block, that\n> is, we already know that at least one is true, and we want to make sure\n> that they are not *both* true.\n> \n> AFAICT the attached patch also fixes the bug without making the assert\n> weaker.\n\nI'm fine with either of them, but..\n\nThe reason that it works the same way is that the if() block excludes\nthe case of (!during_backup_start && S_B_RUNNING)<*1>. In other words\nthe strictness is a kind of illusion [*a]. Actually the assertion does\nnot detect the case <*1>. In this regard, moving the current\nassertion into the if() block might be confusing.\n\nregards,\n\n<*1>: It's evidently superfluous but \"strictness\" and \"illusion\" share\n exactly the same pronunciation in Japanese \"Ghen-Ka-Ku\".\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 24 Oct 2022 11:42:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 11:42:58AM +0900, Kyotaro Horiguchi wrote:\n> At Sat, 22 Oct 2022 09:56:06 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n>> My intention here was that the Assert should be inside the block, that\n>> is, we already know that at least one is true, and we want to make sure\n>> that they are not *both* true.\n>> \n>> AFAICT the attached patch also fixes the bug without making the assert\n>> weaker.\n\nOn the contrary, it seems to me that putting the assertion within the\nif() block makes the assertion weaker, because we would never check\nfor an incorrect state after do_pg_abort_backup() is registered (aka\nany pg_backup_start() call) when not entering in this if() block.\n\nSaying that, if you feel otherwise I am fine with your conclusion as\nwell, so feel free to solve this issue as you see fit. :p\n--\nMichael",
"msg_date": "Mon, 24 Oct 2022 14:19:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On 2022-Oct-24, Michael Paquier wrote:\n\n> On the contrary, it seems to me that putting the assertion within the\n> if() block makes the assertion weaker, because we would never check\n> for an incorrect state after do_pg_abort_backup() is registered (aka\n> any pg_backup_start() call) when not entering in this if() block.\n\nReading it again, I agree with your conclusion, so I'll push as you\nproposed with some extra tests, after they complete running.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n",
"msg_date": "Mon, 24 Oct 2022 11:39:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 11:39:19AM +0200, Alvaro Herrera wrote:\n> Reading it again, I agree with your conclusion, so I'll push as you\n> proposed with some extra tests, after they complete running.\n\nThanks for the fix, Álvaro!\n--\nMichael",
"msg_date": "Tue, 25 Oct 2022 09:40:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Crash after a call to pg_backup_start()"
}
] |
[
{
"msg_contents": "Hello\n\nI've had this patch sitting in a local branch for way too long. It's a\ntrivial thing but for some reason it bothered me: we let the partition \nstrategy flow into the backend as a string and only parse it into the\ncatalog 1-char version quite late.\n\nThis patch makes gram.y responsible for parsing it and passing it down\nas a value from a new enum, which looks more neat. Because it's an\nenum, some \"default:\" cases can be removed in a couple of places. I\nalso added a new elog() in case the catalog contents becomes broken.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)",
"msg_date": "Fri, 21 Oct 2022 11:32:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "parse partition strategy string in gram.y"
},
{
"msg_contents": "\nOn Fri, 21 Oct 2022 at 17:32, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hello\n>\n> I've had this patch sitting in a local branch for way too long. It's a\n> trivial thing but for some reason it bothered me: we let the partition \n> strategy flow into the backend as a string and only parse it into the\n> catalog 1-char version quite late.\n>\n> This patch makes gram.y responsible for parsing it and passing it down\n> as a value from a new enum, which looks more neat. Because it's an\n> enum, some \"default:\" cases can be removed in a couple of places. I\n> also added a new elog() in case the catalog contents becomes broken.\n\nIs there an error here? It seems the LIST partition case was forgotten:\n\n+/*\n+ * Parse a user-supplied partition strategy string into parse node\n+ * PartitionStrategy representation, or die trying.\n+ */\n+static PartitionStrategy\n+parsePartitionStrategy(char *strategy)\n+{\n+ if (pg_strcasecmp(strategy, \"range\") == 0) <-- it should be list\n+ return PARTITION_STRATEGY_RANGE; <-- PARTITION_STRATEGY_LIST\n+ else if (pg_strcasecmp(strategy, \"hash\") == 0)\n+ return PARTITION_STRATEGY_HASH;\n+ else if (pg_strcasecmp(strategy, \"range\") == 0)\n+ return PARTITION_STRATEGY_RANGE;\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"unrecognized partitioning strategy \\\"%s\\\"\",\n+ strategy)));\n+}\n+\n\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 21 Oct 2022 18:05:18 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "On 2022-Oct-21, Japin Li wrote:\n\n> Is there an error here? It seems the LIST partition case was forgotten:\n\nOf course.\nhttps://cirrus-ci.com/build/4721735111540736\n\nThis is what you get for moving cases around at the last minute ...\n\nFixed, thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 21 Oct 2022 12:12:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "\nOn Fri, 21 Oct 2022 at 18:12, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Oct-21, Japin Li wrote:\n>\n>> Is there an error here? It seems the LIST partition case was forgotten:\n>\n> Of course.\n> https://cirrus-ci.com/build/4721735111540736\n>\n> This is what you get for moving cases around at the last minute ...\n>\n\nIs there any way to get the regression test diffs from Cirrus CI?\nI did not find the diffs in [1].\n\n[1] https://cirrus-ci.com/build/4721735111540736\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 21 Oct 2022 18:22:44 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "On 2022-Oct-21, Japin Li wrote:\n\n> Is there any way to get the regression tests diffs from Cirrus CI?\n> I did not find the diffs in [1].\n\nI think they should be somewhere in the artifacts, but I'm not sure.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n",
"msg_date": "Fri, 21 Oct 2022 12:26:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "headerscheck fail, fixed here.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)",
"msg_date": "Fri, 21 Oct 2022 12:38:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 06:22:44PM +0800, Japin Li wrote:\n> Is there any way to get the regression tests diffs from Cirrus CI?\n> I did not find the diffs in [1].\n> \n> [1] https://cirrus-ci.com/build/4721735111540736\n\nThey're called \"main\".\nI'm planning on submitting a patch to rename it to \"regress\", someday.\nSee also: https://www.postgresql.org/message-id/20221001161420.GF6256%40telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 21 Oct 2022 07:34:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "\nOn Fri, 21 Oct 2022 at 20:34, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Oct 21, 2022 at 06:22:44PM +0800, Japin Li wrote:\n>> Is there any way to get the regression test diffs from Cirrus CI?\n>> I did not find the diffs in [1].\n>> \n>> [1] https://cirrus-ci.com/build/4721735111540736\n>\n> They're called \"main\".\n> I'm planning on submitting a patch to rename it to \"regress\", someday.\n> See also: https://www.postgresql.org/message-id/20221001161420.GF6256%40telsasoft.com\n\nOh, thank you very much! I found it in testrun/build/testrun/main/regress [1].\n\n[1] https://api.cirrus-ci.com/v1/artifact/task/6215926717612032/testrun/build/testrun/main/regress/regression.diffs\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 21 Oct 2022 21:46:18 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "Is there a reason why HASH partitioning does not currently support range partition bounds, where the values in the partition bounds would refer to the hashed value?\r\n\r\nThe advantage of hash partition bounds is that they are not domain-specific, as they are for ordinary RANGE partitions, but they are more flexible than MODULUS/REMAINDER partition bounds.\r\n\r\nOn 10/21/22, 9:48 AM, \"Japin Li\" <japinli@hotmail.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n On Fri, 21 Oct 2022 at 20:34, Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n > On Fri, Oct 21, 2022 at 06:22:44PM +0800, Japin Li wrote:\r\n >> Is there any way to get the regression tests diffs from Cirrus CI?\r\n >> I did not find the diffs in [1].\r\n >>\r\n >> [1] https://cirrus-ci.com/build/4721735111540736\r\n >\r\n > They're called \"main\".\r\n > I'm planning on submitting a patch to rename it to \"regress\", someday.\r\n > See also: https://www.postgresql.org/message-id/20221001161420.GF6256%40telsasoft.com\r\n\r\n Oh, thank you very much! I find it in testrun/build/testrun/main/regress [1].\r\n\r\n [1] https://api.cirrus-ci.com/v1/artifact/task/6215926717612032/testrun/build/testrun/main/regress/regression.diffs\r\n\r\n --\r\n Regrads,\r\n Japin Li.\r\n ChengDu WenWu Information Technology Co.,Ltd.\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 24 Oct 2022 14:32:02 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "On 2022-Oct-24, Finnerty, Jim wrote:\n\n> Is there a reason why HASH partitioning does not currently support\n> range partition bounds, where the values in the partition bounds would\n> refer to the hashed value?\n\nJust lack of an implementation, I suppose.\n\n> The advantage of hash partition bounds is that they are not\n> domain-specific, as they are for ordinary RANGE partitions, but they\n> are more flexible than MODULUS/REMAINDER partition bounds.\n\nWell, modulus/remainder is what we have. If you have ideas for a\ndifferent implementation, let's hear them. I suppose we would have to\nknow about both the user interface and how it would work internally,\nfrom two perspectives: how does tuple routing work (ie. how to match a\ntuple's values to a set of bound values), and how does partition\npruning work (ie. how do partition bounds match a query's restriction\nclauses).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 24 Oct 2022 18:13:17 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Oct-24, Finnerty, Jim wrote:\n>> The advantage of hash partition bounds is that they are not\n>> domain-specific, as they are for ordinary RANGE partitions, but they\n>> are more flexible than MODULUS/REMAINDER partition bounds.\n\nI'm more than a bit skeptical of that claim. Under what\ncircumstances (other than a really awful hash function,\nperhaps) would it make sense to not use equi-sized hash\npartitions? If you can predict that more stuff is going\nto go into one partition than another, then you need to\nfix your hash function, not invent more complication for\nthe core partitioning logic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Oct 2022 20:50:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "It will often happen that some hash keys are more frequently referenced than others. Consider a scenario where customer_id is the hash key, and one customer is very large in terms of their activity, like IBM, and other keys have much less activity. This asymmetry creates a noisy neighbor problem. Some partitions may have more than one noisy neighbor, and in general it would be more flexible to be able to divide the work evenly in terms of activity instead of evenly with respect to the encoding of the keys.\r\n\r\nOn 10/24/22, 8:50 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\r\n > On 2022-Oct-24, Finnerty, Jim wrote:\r\n >> The advantage of hash partition bounds is that they are not\r\n >> domain-specific, as they are for ordinary RANGE partitions, but they\r\n >> are more flexible than MODULUS/REMAINDER partition bounds.\r\n\r\n I'm more than a bit skeptical of that claim. Under what\r\n circumstances (other than a really awful hash function,\r\n perhaps) would it make sense to not use equi-sized hash\r\n partitions? \r\n\r\n<snip>\r\n\r\n regards, tom lane\r\n\r\n",
"msg_date": "Tue, 25 Oct 2022 14:18:51 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "Or if you know the frequencies of the highly frequent values of the partitioning key at the time the partition bounds are defined, you could define hash ranges that contain approximately the same number of rows in each partition. A parallel sequential scan of all partitions would then perform better because data skew is minimized. \r\n\r\n",
"msg_date": "Tue, 25 Oct 2022 18:36:27 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "On 2022-Oct-25, Finnerty, Jim wrote:\n\n> Or if you know the frequencies of the highly frequent values of the\n> partitioning key at the time the partition bounds are defined, you\n> could define hash ranges that contain approximately the same number of\n> rows in each partition. A parallel sequential scan of all partitions\n> would then perform better because data skew is minimized. \n\nThis sounds very much like list partitioning to me.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n",
"msg_date": "Wed, 26 Oct 2022 01:15:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "On 2022-Oct-26, Alvaro Herrera wrote:\n\n> On 2022-Oct-25, Finnerty, Jim wrote:\n> \n> > Or if you know the frequencies of the highly frequent values of the\n> > partitioning key at the time the partition bounds are defined, you\n> > could define hash ranges that contain approximately the same number of\n> > rows in each partition. A parallel sequential scan of all partitions\n> > would then perform better because data skew is minimized. \n> \n> This sounds very much like list partitioning to me.\n\n... or maybe you mean \"if the value is X then use this specific\npartition, otherwise use hash partitioning\". It's a bit like\nmulti-level partitioning, but not really.\n\n(You could test this idea by using two levels, list partitioning on top\nwith a default partition which is in turn partitioned by hash; but this\nis unlikely to work well for large scale in practice. Or does it?)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n",
"msg_date": "Wed, 26 Oct 2022 01:23:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
},
{
"msg_contents": "Pushed this.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 3 Nov 2022 16:42:17 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: parse partition strategy string in gram.y"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on some BRIN code, I discovered a bug in handling NULL\nvalues - when inserting a non-NULL value into a NULL-only range, we\nreset the all_nulls flag but don't update the has_nulls flag. And\nbecause of that we then fail to return the range for IS NULL ranges.\n\nReproducing this is trivial:\n\n create table t (a int);\n create index on t using brin (a);\n insert into t values (null);\n insert into t values (1);\n\n set enable_seqscan = off;\n select * from t where a is null;\n\nThis should return 1 row, but actually it returns no rows.\n\nAttached is a patch fixing this by properly updating the has_nulls flag.\n\nI reproduced this all the way back to 9.5, so it's a long-standing bug.\nIt's interesting no one noticed / reported it so far, it doesn't seem\nlike a particularly rare corner case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 21 Oct 2022 17:23:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On Fri, 21 Oct 2022 at 17:24, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> While working on some BRIN code, I discovered a bug in handling NULL\n> values - when inserting a non-NULL value into a NULL-only range, we\n> reset the all_nulls flag but don't update the has_nulls flag. And\n> because of that we then fail to return the range for IS NULL ranges.\n\nAh, that's bad.\n\nOne question though: doesn't (shouldn't?) column->bv_allnulls already\nimply column->bv_hasnulls? The column has nulls if all of the values\nare null, right? Or is the description of the field deceptive, and\ndoes bv_hasnulls actually mean \"has nulls bitmap\"?\n\n> Attached is a patch fixing this by properly updating the has_nulls flag.\n\nOne comment on the patch:\n\n> +SET enable_seqscan = off;\n> + [...]\n> +SET enable_seqscan = off;\n\nLooks like duplicated SETs. Should that last one be RESET instead?\n\nApart from that, this patch looks good.\n\n- Matthias\n\n\n",
"msg_date": "Fri, 21 Oct 2022 17:50:32 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "\n\nOn 10/21/22 17:50, Matthias van de Meent wrote:\n> On Fri, 21 Oct 2022 at 17:24, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> While working on some BRIN code, I discovered a bug in handling NULL\n>> values - when inserting a non-NULL value into a NULL-only range, we\n>> reset the all_nulls flag but don't update the has_nulls flag. And\n>> because of that we then fail to return the range for IS NULL ranges.\n> \n> Ah, that's bad.\n> \n\nYeah, I guess we'll need to inform the users to consider rebuilding BRIN\nindexes on NULL-able columns.\n\n> One question though: doesn't (shouldn't?) column->bv_allnulls already\n> imply column->bv_hasnulls? The column has nulls if all of the values\n> are null, right? Or is the description of the field deceptive, and\n> does bv_hasnulls actually mean \"has nulls bitmap\"?\n> \n\nWhat null bitmap do you mean? We're talking about summary for a page\nrange - IIRC we translate this to nullbitmap for a BRIN tuple, but there\nmay be multiple columns, and \"has nulls bitmap\" is an aggregate over all\nof them.\n\nYeah, maybe it'd make sense to also have has_nulls=true whenever\nall_nulls=true, and maybe it'd be simpler because it'd be enough to\ncheck just one flag in consistent function etc. But we still need to\ntrack 2 different states - \"has nulls\" and \"has summary\".\n\nIn any case, this ship sailed long ago - at least for the existing\nopclasses.\n\n\n>> Attached is a patch fixing this by properly updating the has_nulls flag.\n> \n> One comment on the patch:\n> \n>> +SET enable_seqscan = off;\n>> + [...]\n>> +SET enable_seqscan = off;\n> \n> Looks like duplicated SETs. Should that last one be RESET instead?\n> \n\nYeah, should have been RESET.\n\n> Apart from that, this patch looks good.\n> \n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 21 Oct 2022 18:44:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 10/21/22 18:44, Tomas Vondra wrote:\n> \n> ...\n>\n>> Apart from that, this patch looks good.\n>>\n\nSadly, I don't think we can fix it like this :-(\n\nThe problem is that all ranges start with all_nulls=true, because the\nnew range gets initialized by brin_memtuple_initialize() like that. But\nthis happens for *every* range before we even start processing the rows.\nSo this way all the ranges would end up with has_nulls=true, making that\nflag pretty useless.\n\nActually, even just doing \"truncate\" on the table creates such all-nulls\nrange for the first range, and serializes it to disk.\n\nI wondered why we even write such tuples for \"empty\" ranges to disk, for\nexample after \"TRUNCATE\" - the table is empty by definition, so how come\nwe write all-nulls brin summary for the first range?\n\nFor example brininsert() checks if the brin tuple was modified and needs\nto be written back, but brinbuild() just ignores that, and initializes\n(and writes) writes the tuple to disk anyway. I think we should not do\nthat - there should be a flag in BrinBuildState, tracking if the BRIN\ntuple was modified, and we should only write it if it's true.\n\nThat means we should never get an on-disk summary representing nothing.\n\nThat doesn't fix the issue, though, because we still need to pass the\nmemtuple tuple to the add_value opclass procedure, and whether it sets\nthe has_nulls flag depends on whether it's a new tuple representing no\nother rows (in which case has_nulls remains false) or whether it was\nread from disk (in which case it needs to be flipped to 'true').\n\nBut the opclass has no way to tell the difference at the moment - it\njust gets the BrinMemTuple. So we'd have to extend this, somehow.\n\nI wonder how to do this in a back-patchable way - we can't add\nparameters to the opclass procedure, and the other solution seems to be\nstoring it right in the BrinMemTuple, somehow. 
But that's likely an ABI\nbreak :-(\n\nThe only solution I can think of is actually passing it using all_nulls\nand has_nulls - we could set both flags to true (which we never do now)\nand teach the opclass that it signifies \"empty\" (and thus not to update\nhas_nulls after resetting all_nulls).\n\nSomething like the attached (I haven't added any more tests, not sure\nwhat those would look like - I can't think of a query testing this,\nalthough maybe we could check how the flags change using pageinspect).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 22 Oct 2022 02:30:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 2022-Oct-22, Tomas Vondra wrote:\n\n> I wonder how to do this in a back-patchable way - we can't add\n> parameters to the opclass procedure, and the other solution seems to be\n> storing it right in the BrinMemTuple, somehow. But that's likely an ABI\n> break :-(\n\nHmm, I don't see the ABI incompatibility. BrinMemTuple is an in-memory\nstructure, so you can add new members at the end of the struct and it\nwill pose no problems to existing code.\n\n> The only solution I can think of is actually passing it using all_nulls\n> and has_nulls - we could set both flags to true (which we never do now)\n> and teach the opclass that it signifies \"empty\" (and thus not to update\n> has_nulls after resetting all_nulls).\n> \n> Something like the attached (I haven't added any more tests, not sure\n> what would those look like - I can't think of a query testing this,\n> although maybe we could check how the flags change using pageinspect).\n\nI'll try to have a look at these patches tomorrow or on Monday.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n\n\n",
"msg_date": "Sat, 22 Oct 2022 10:00:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 10/22/22 10:00, Alvaro Herrera wrote:\n> On 2022-Oct-22, Tomas Vondra wrote:\n> \n>> I wonder how to do this in a back-patchable way - we can't add\n>> parameters to the opclass procedure, and the other solution seems to be\n>> storing it right in the BrinMemTuple, somehow. But that's likely an ABI\n>> break :-(\n> \n> Hmm, I don't see the ABI incompatibility. BrinMemTuple is an in-memory\n> structure, so you can add new members at the end of the struct and it\n> will pose no problems to existing code.\n> \n\nBut we're not passing BrinMemTuple to the opclass procedures - we're\npassing a pointer to BrinValues, so we'd have to add the flag there. And\nwe're storing an array of those, so adding a field may shift the array\neven if you add it at the end. Not sure if that's OK or not.\n\n>> The only solution I can think of is actually passing it using all_nulls\n>> and has_nulls - we could set both flags to true (which we never do now)\n>> and teach the opclass that it signifies \"empty\" (and thus not to update\n>> has_nulls after resetting all_nulls).\n>>\n>> Something like the attached (I haven't added any more tests, not sure\n>> what would those look like - I can't think of a query testing this,\n>> although maybe we could check how the flags change using pageinspect).\n> \n> I'll try to have a look at these patches tomorrow or on Monday.\n> \n\nI was experimenting with this a bit more, and unfortunately the latest\npatch is still a few bricks shy - it did fix this particular issue, but\nthere were other cases that remained/got broken. See the first patch,\nthat adds a bunch of pageinspect tests testing different combinations.\n\nAfter thinking about it a bit more, I think we can't quite fix this at\nthe opclass level, so the yesterday's patches are wrong. 
Instead, this\nshould be fixed in values_add_to_range() - the whole trick is we need to\nremember the range was empty at the beginning, and only set the flag\nwhen allnulls is false.\n\nThe reworked patch does that. And we can use the same logic (both flags\nset mean no tuples were added to the range) when building the index, a\nseparate flag is not needed.\n\nThis slightly affects existing regression tests, because we won't create\nany ranges for empty table (now we created one, because we initialized a\ntuple in brinbuild and then wrote it to disk). This means that\nbrin_summarize_range now returns 0, but I think that's fine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 22 Oct 2022 15:47:43 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Here's an improved version of the fix I posted about a month ago.\n\n0001\n\nAdds tests demonstrating the issue, as before. I realized there's an\nisolation test in src/test/module/brin that can demonstrate this, so I\nmodified it too, not just the pageinspect test as before.\n\n\n0002\n\nUses the combination of all_nulls/has_nulls to identify \"empty\" range,\nand does not store them to disk. I however realized not storing \"empty\"\nranges is probably not desirable. Imagine a table with a \"gap\" (e.g. due\nto a batch DELETE) of pages with no rows:\n\n create table x (a int) with (fillfactor = 10);\n insert into x select i from generate_series(1,1000) s(i);\n delete from x where a < 1000;\n create index on x using brin(a) with (pages_per_range=1);\n\nAny bitmap index scan using this index would have to scan all those\nempty ranges, because there are no summaries.\n\n\n0003\n\nStill uses the all_nulls/has_nulls flags to identify empty ranges, but\nstores them - and then we check the combination in bringetbitmap() to\nskip those ranges as not matching any scan keys.\n\nThis also restores some of the existing behavior - for example creating\na BRIN index on entirely empty table (no pages at all) still allocates a\n48kB index (3 index pages, 3 fsm pages). Seems a bit strange, but it's\nan existing behavior.\n\n\nAs explained before, I've considered adding an new flag to one of the\nBRIN structs - BrinMemTuple or BrinValues. But we can't add as last\nfield to BrinMemTuple because there already is FLEXIBLE_ARRAY_MEMBER,\nand adding a field to BrinValues would change stride of the bt_columns\narray. So this would break ABI, making this not backpatchable.\n\nFurthermore, if we want to store summaries for empty ranges (which is\nwhat 0003 does), we need to store the flag in the BRIN index tuple. 
And\nwe can't change the on-disk representation in backbranches, so encoding\nthis in the existing tuple seems like the only way.\n\nSo using the combination of all_nulls/has_nulls flags seems like the only\nviable option, unfortunately.\n\nOpinions? Considering this will need to be backpatched, it'd be good to\nget some feedback on the approach. I think it's fine, but it would be\nunfortunate to fix one issue but break BRIN in a different way.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 28 Nov 2022 01:13:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 01:13:14AM +0100, Tomas Vondra wrote:\n> Opinions? Considering this will need to be backpatches, it'd be good to\n> get some feedback on the approach. I think it's fine, but it would be\n> unfortunate to fix one issue but break BRIN in a different way.\n\n> --- a/contrib/pageinspect/Makefile\n> +++ b/contrib/pageinspect/Makefile\n> @@ -22,7 +22,7 @@ DATA = pageinspect--1.10--1.11.sql \\\n> \tpageinspect--1.0--1.1.sql\n> PGFILEDESC = \"pageinspect - functions to inspect contents of database pages\"\n> \n> -REGRESS = page btree brin gin gist hash checksum oldextversions\n> +REGRESS = page btree brin gin gist hash checksum oldextversions brinbugs\n\nI can't comment on the patch itself, but:\n\nThese changes to ./Makefile will also need to be made in ./meson.build.\n\nAlso (per cirrusci), the test sometimes fail since two parallel tests\nare doing \"CREATE EXTENSION\".\n\n\n",
"msg_date": "Tue, 29 Nov 2022 14:38:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Hi,\n\nhere's an improved and cleaned-up version of the fix.\n\nI removed brinbugs.sql from pageinspect, because it seems enough to have\nthe other tests (I added brinbugs first, before realizing those exist).\nThis also means meson.build is fine and there are no tests doing CREATE\nEXTENSION concurrently etc.\n\nI decided to go with the 0003 approach, which stores summaries for empty\nranges. That seems to be less intrusive (it's more like what we do now),\nand works better for tables with a lot of bulk deletes. It means we can\nhave ranges with allnulls=hasnulls=true, which wasn't the case before,\nbut I don't see why this should break e.g. custom opclasses (if it does,\nit probably means the opclass is wrong).\n\nFinally, I realized union_tuples needs to be tweaked to deal with empty\nranges properly. The changes are fairly limited, though.\n\nI plan to push this into master right at the beginning of January, and\nthen backpatch a couple days later.\n\nI still feel a bit uneasy about tweaking this, but I don't think there's\na better way than reusing the existing flags.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Dec 2022 01:18:36 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 01:18:36AM +0100, Tomas Vondra wrote:\n> +\t\t * Does the range already has NULL values? Either of the flags can\n\nshould say: \"already have NULL values\"\n\n> +\t\t * If we had NULLS, and the opclass didn't set allnulls=true, set\n> +\t\t * the hasnulls so that we know there are NULL values.\n\nYou could remove \"the\" before \"hasnulls\".\nOr say \"clear hasnulls so that..\"\n\n> @@ -585,6 +587,13 @@ brin_deform_tuple(BrinDesc *brdesc, BrinTuple *tuple, BrinMemTuple *dMemtuple)\n> \t{\n> \t\tint\t\t\ti;\n> \n> +\t\t/*\n> +\t\t * Make sure to overwrite the hasnulls flag, because it was initialized\n> +\t\t * to true by brin_memtuple_initialize and we don't want to skip it if\n> +\t\t * allnulls.\n\nDoes \"if allnulls\" mean \"if allnulls is true\" ?\nIt's a bit unclear.\n\n> +\t\t */\n> +\t\tdtup->bt_columns[keyno].bv_hasnulls = hasnulls[keyno];\n> +\n> \t\tif (allnulls[keyno])\n> \t\t{\n> \t\t\tvalueno += brdesc->bd_info[keyno]->oi_nstored;\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 6 Jan 2023 18:37:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Thanks Justin! I've applied all the fixes you proposed, and (hopefully)\nimproved a couple other comments.\n\nI've been working on this over the past couple days, trying to polish\nand commit it over the weekend - both into master and backbranches.\nSadly, the backpatching part turned out to be a bit more complicated\nthan I expected, because of the BRIN reworks in PG14 (done by me, as\nfoundation for the new opclasses, so ... well).\n\nAnyway, I got it done, but it's a bit uglier than I hoped for and I\ndon't feel like pushing this on Sunday midnight. I think it's correct,\nbut maybe another pass to polish it a bit more is better.\n\nSo here are two patches - one for 11-13, the other for 14-master.\n\nThere's also a separate patch with pageinspect tests, but only as a\ndemonstration of various (non)broken cases, not for commit. And then\nalso a bash script generating indexes with random data, randomized\nsummarization etc. - on unpatched systems this happens to fail in about\n1/3 of the runs (at least for me). I haven't seen any failures with the\npatches attached (on any branch).\n\nAs for the issue / fix, I don't think there's a better solution than\nwhat the patch does - we need to distinguish empty / all-nulls ranges,\nbut we can't add a flag because of on-disk format / ABI. So using the\nexisting flags seems like the only option - I haven't heard any other\nideas so far, and I couldn't come up with any myself either.\n\nI've also thought about alternative \"encodings\" into allnulls/hasnulls,\ninstead of treating (true,true) as \"empty\" - but none of that ended up\nbeing any simpler, quite the opposite actually, as it would change what\nthe individual flags mean etc. So AFAICS this is the best / least\ndisruptive option.\n\nI went over all the places touching these flags, to double check if any\nof those needs some tweaks (similar to union_tuples, which I missed for\na long time). 
But I haven't found anything else, so I think this version\nof the patches is complete.\n\nAs for assessing how many indexes are affected - in principle, any index\non columns with NULLs may be broken. But it only matters if the index is\nused for IS NULL queries, other queries are not affected.\n\nI also realized that this only affects insertion of individual tuples\ninto existing all-null summaries, not \"bulk\" summarization that sees all\nvalues at once. This happens because in this case add_values_to_range\nsets hasnulls=true for the first (NULL) value, and then calls the\naddValue procedure for the second (non-NULL) one, which resets the\nallnulls flag to false.\n\nBut when inserting individual rows, we first set hasnulls=true, but\nbrin_form_tuple ignores that because of allnulls=true. And then when\ninserting the second row, we start with hasnulls=false again, and the\nopclass quietly resets the allnulls flag.\n\nI guess this further reduces the number of broken indexes, especially\nfor data sets with small null_frac, or for append-only (or -mostly)\ntables where most of the summarization is bulk.\n\nI still feel a bit uneasy about this, but I think the patch is solid.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Jan 2023 00:34:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 1/9/23 00:34, Tomas Vondra wrote:\n> \n> I've been working on this over the past couple days, trying to polish\n> and commit it over the weekend - both into master and backbranches.\n> Sadly, the backpatching part turned out to be a bit more complicated\n> than I expected, because of the BRIN reworks in PG14 (done by me, as\n> foundation for the new opclasses, so ... well).\n> \n> Anyway, I got it done, but it's a bit uglier than I hoped for and I\n> don't feel like pushing this on Sunday midnight. I think it's correct,\n> but maybe another pass to polish it a bit more is better.\n> \n> So here are two patches - one for 11-13, the other for 14-master.\n> \n\nI spent a bit more time on this fix. I realized there are two more\nplaces that need fixes.\n\nFirstly, the placeholder tuple needs to be marked as \"empty\" too, so\nthat it can be correctly updated by other backends etc.\n\nSecondly, union_tuples had a couple bugs in handling empty ranges (this\nis related to the placeholder tuple changes). I wonder what's the best\nway to test this in an automated way - it's very dependent on timing of\nthe concurrent updated. For example we need to do something like this:\n\n T1: run pg_summarize_range() until it inserts the placeholder tuple\n T2: do an insert into the page range (updates placeholder)\n T1: continue pg_summarize_range() to merge into the placeholder\n\nBut there are no convenient ways to do this, I think. I had to check the\nvarious cases using breakpoints in gdb etc.\n\nI'm not very happy with the union_tuples() changes - it's quite verbose,\nperhaps a bit too verbose. We have to check for empty ranges first, and\nthen various combinations of allnulls/hasnulls flags for both BRIN\ntuples. 
There are 9 combinations, and the current code just checks them\none by one - I was getting repeatedly confused by the original code, but\nmaybe it's too much.\n\nAs for the backpatch, I tried to keep it as close to the 14+ fixes as\npossible, but it effectively backports some of the 14+ BRIN changes. In\nparticular, 14+ moved most of the NULL-handling logic from opclasses to\nbrin.c, and I think it's reasonable to do that for the backbranches too.\n\nThe alternative is to apply the same fix to every BRIN_PROCNUM_UNION\nopclass procedure out there. I guess doing that for minmax+inclusion is\nnot a huge deal, but what about external opclasses? And without the fix\nthe indexes are effectively broken. Fixing this outside in brin.c (in\nthe union procedure) fixes this for every opclass procedure, without any\nactual limitation of functionality (14+ does that anyway).\n\nBut maybe someone thinks this is a bad idea and we should do something\nelse in the backbranches?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 24 Feb 2023 16:53:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Thanks for doing all this. (Do I understand correctly that this patch\nis not in the commitfest?)\n\nI think my mental model for this was that \"allnulls\" meant that either\nthere are no values for the column in question or that the values were\nall nulls (For minmax without NULL handling, which is where this all\nstarted, these two things are essentially the same: the range is not to\nbe returned. So this became a bug the instant I added handling for NULL\nvalues.) I failed to realize that these were two different things, and\nthis is likely the origin of all these troubles.\n\nWhat do you think of using the unused bit in BrinTuple->bt_info to\ndenote a range that contains no heap tuples? This also means we need it\nin BrinMemTuple, I think we can do this:\n\n@@ -44,6 +44,7 @@ typedef struct BrinValues\n typedef struct BrinMemTuple\n {\n \tbool\t\tbt_placeholder; /* this is a placeholder tuple */\n+\tbool\t\tbt_empty_range;\t/* range has no tuples */\n \tBlockNumber bt_blkno;\t\t/* heap blkno that the tuple is for */\n \tMemoryContext bt_context;\t/* memcxt holding the bt_columns values */\n \t/* output arrays for brin_deform_tuple: */\n@@ -69,7 +70,7 @@ typedef struct BrinTuple\n \t *\n \t * 7th (high) bit: has nulls\n \t * 6th bit: is placeholder tuple\n-\t * 5th bit: unused\n+\t * 5th bit: range has no tuples\n \t * 4-0 bit: offset of data\n \t * ---------------\n \t */\n@@ -82,7 +83,7 @@ typedef struct BrinTuple\n * bt_info manipulation macros\n */\n #define BRIN_OFFSET_MASK\t\t0x1F\n-/* bit 0x20 is not used at present */\n+#define BRIN_EMPTY_RANGE\t\t0x20\n #define BRIN_PLACEHOLDER_MASK\t0x40\n #define BRIN_NULLS_MASK\t\t\t0x80\n \n(Note that bt_empty_range uses a hole in the struct, so there's no ABI\nchange.)\n\nThis is BRIN-tuple-level, not column-level, so conceptually it seems\nmore appropriate. 
(In the case where both are empty in union_tuples, we\ncan return without entering the per-attribute loop at all, though I\nadmit it's not a very interesting case.) This approach avoids having to\ninvent the strange combination of all+has to mean empty.\n\n\nOn 2023-Feb-24, Tomas Vondra wrote:\n\n> I wonder what's the best\n> way to test this in an automated way - it's very dependent on timing of\n> the concurrent updated. For example we need to do something like this:\n> \n> T1: run pg_summarize_range() until it inserts the placeholder tuple\n> T2: do an insert into the page range (updates placeholder)\n> T1: continue pg_summarize_range() to merge into the placeholder\n> \n> But there are no convenient ways to do this, I think. I had to check the\n> various cases using breakpoints in gdb etc.\n\nYeah, I struggled with this during initial development but in the end\ndid nothing. I think we would need to introduce some new framework,\nperhaps Korotkov stop-events stuff at \nhttps://postgr.es/m/CAPpHfdsTeb+hBT5=qxghjNG_cHcJLDaNQ9sdy9vNwBF2E2PuZA@mail.gmail.com\nwhich seemed to me a good fit -- we would add a stop point after the\nplaceholder tuple is inserted.\n\n> I'm not very happy with the union_tuples() changes - it's quite verbose,\n> perhaps a bit too verbose. We have to check for empty ranges first, and\n> then various combinations of allnulls/hasnulls flags for both BRIN\n> tuples. There are 9 combinations, and the current code just checks them\n> one by one - I was getting repeatedly confused by the original code, but\n> maybe it's too much.\n\nI think it's okay. I tried to make it more compact (by saying \"these\ntwo combinations here are case 2, and these two other are case 4\", and\nkeeping each of the other combinations a separate case; so there are\nreally 7 cases). But that doesn't make it any easier to follow, on the\ncontrary it was more convoluted. 
I think a dozen extra lines of source\nis not a problem.\n\n> The alternative is to apply the same fix to every BRIN_PROCNUM_UNION\n> opclass procedure out there. I guess doing that for minmax+inclusion is\n> not a huge deal, but what about external opclasses? And without the fix\n> the indexes are effectively broken. Fixing this outside in brin.c (in\n> the union procedure) fixes this for every opclass procedure, without any\n> actual limitation of functinality (14+ does that anyway).\n\nAbout the hypothetical question, you could as well ask what about\nunicorns. I have never seen any hint that any external opclass exist.\nI am all for maintaining compatibility, but I think this concern is\noverblown for BRIN. Anyway, I think your proposed fix is better than\nchanging individual 'union' support procs, so it doesn't matter.\n\nAs far as I understood, you're now worried that there will be an\nincompatibility because we will fail to call the 'union' procedure in\ncases where we previously called it? In other words, you fear that some\nhypothetical opclass was handling the NULL values in some way that's\nincompatible with this? I haven't thought terribly hard about this, but\nI can't see a way for this to cause incompatibilities.\n\n> But maybe someone thinks this is a bad idea and we should do something\n> else in the backbranches?\n\nI think the new handling of NULLs in commit 72ccf55cb99c (\"Move IS [NOT]\nNULL handling from BRIN support functions\") is better than what was\nthere before, so I don't object to backpatching it now that we know it's\nnecessary to fix a bug, and also we have field experience that the\napproach is solid.\n\nThe attached patch is just a pointer to comments that I think need light\nedition. There's also a typo \"bot\" (for \"both\") in a comment that I\nthink would go away if you accept my suggestion to store 'empty' at the\ntuple level. 
Note that I worked with the REL_14_STABLE sources, because\nfor some reason I thought that that was the newest that needed\nbackpatching of 72ccf55cb99c, but now that I'm finishing this email I\nrealize that I should have used 13 instead /facepalm\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)",
"msg_date": "Fri, 3 Mar 2023 11:32:19 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "\n\nOn 3/3/23 11:32, Alvaro Herrera wrote:\n> \n> Thanks for doing all this. (Do I understand correctly that this patch\n> is not in the commitfest?)\n> \n> I think my mental model for this was that \"allnulls\" meant that either\n> there are no values for the column in question or that the values were\n> all nulls (For minmax without NULL handling, which is where this all\n> started, these two things are essentially the same: the range is not to\n> be returned. So this became a bug the instant I added handling for NULL\n> values.) I failed to realize that these were two different things, and\n> this is likely the origin of all these troubles.\n> \n> What do you think of using the unused bit in BrinTuple->bt_info to\n> denote a range that contains no heap tuples? This also means we need it\n> in BrinMemTuple, I think we can do this:\n> \n> @@ -44,6 +44,7 @@ typedef struct BrinValues\n> typedef struct BrinMemTuple\n> {\n> \tbool\t\tbt_placeholder; /* this is a placeholder tuple */\n> +\tbool\t\tbt_empty_range;\t/* range has no tuples */\n> \tBlockNumber bt_blkno;\t\t/* heap blkno that the tuple is for */\n> \tMemoryContext bt_context;\t/* memcxt holding the bt_columns values */\n> \t/* output arrays for brin_deform_tuple: */\n> @@ -69,7 +70,7 @@ typedef struct BrinTuple\n> \t *\n> \t * 7th (high) bit: has nulls\n> \t * 6th bit: is placeholder tuple\n> -\t * 5th bit: unused\n> +\t * 5th bit: range has no tuples\n> \t * 4-0 bit: offset of data\n> \t * ---------------\n> \t */\n> @@ -82,7 +83,7 @@ typedef struct BrinTuple\n> * bt_info manipulation macros\n> */\n> #define BRIN_OFFSET_MASK\t\t0x1F\n> -/* bit 0x20 is not used at present */\n> +#define BRIN_EMPTY_RANGE\t\t0x20\n> #define BRIN_PLACEHOLDER_MASK\t0x40\n> #define BRIN_NULLS_MASK\t\t\t0x80\n> \n> (Note that bt_empty_range uses a hole in the struct, so there's no ABI\n> change.)\n> \n> This is BRIN-tuple-level, not column-level, so conceptually it seems\n> more appropriate. 
(In the case where both are empty in union_tuples, we\n> can return without entering the per-attribute loop at all, though I\n> admit it's not a very interesting case.) This approach avoids having to\n> invent the strange combination of all+has to mean empty.\n> \n\nOh, that's an interesting idea! I hadn't realized there's an unused bit\nat the tuple level, and I absolutely agree it'd be a better match than\nhaving this in individual summaries (like now).\n\nIt'd mean we'd not have the option to fix this within the opclasses,\nbecause we only pass them the BrinValue and not the tuple. But if you\nthink that's reasonable, that'd be OK.\n\nThe other thing I was unsure about is if the bit could be set for any existing\ntuples, but AFAICS that shouldn't be possible - brin_form_tuple does\npalloc0, so it should be 0.\n\nI suspect doing this might make the patch quite a bit simpler, actually.\n\n> \n> On 2023-Feb-24, Tomas Vondra wrote:\n> \n>> I wonder what's the best\n>> way to test this in an automated way - it's very dependent on timing of\n>> the concurrent updated. For example we need to do something like this:\n>>\n>> T1: run pg_summarize_range() until it inserts the placeholder tuple\n>> T2: do an insert into the page range (updates placeholder)\n>> T1: continue pg_summarize_range() to merge into the placeholder\n>>\n>> But there are no convenient ways to do this, I think. I had to check the\n>> various cases using breakpoints in gdb etc.\n> \n> Yeah, I struggled with this during initial development but in the end\n> did nothing. I think we would need to introduce some new framework,\n> perhaps Korotkov stop-events stuff at \n> https://postgr.es/m/CAPpHfdsTeb+hBT5=qxghjNG_cHcJLDaNQ9sdy9vNwBF2E2PuZA@mail.gmail.com\n> which seemed to me a good fit -- we would add a stop point after the\n> placeholder tuple is inserted.\n> \n\nYeah, but we don't have that at the moment. 
I actually ended up adding a\ncouple sleeps during development, which allowed me to hit just the right\norder of operations - a poor-man's version of those stop-events. Did\nwork but hardly an acceptable approach.\n\n>> I'm not very happy with the union_tuples() changes - it's quite verbose,\n>> perhaps a bit too verbose. We have to check for empty ranges first, and\n>> then various combinations of allnulls/hasnulls flags for both BRIN\n>> tuples. There are 9 combinations, and the current code just checks them\n>> one by one - I was getting repeatedly confused by the original code, but\n>> maybe it's too much.\n> \n> I think it's okay. I tried to make it more compact (by saying \"these\n> two combinations here are case 2, and these two other are case 4\", and\n> keeping each of the other combinations a separate case; so there are\n> really 7 cases). But that doesn't make it any easier to follow, on the\n> contrary it was more convoluted. I think a dozen extra lines of source\n> is not a problem.\n> \n\nOK\n\n>> The alternative is to apply the same fix to every BRIN_PROCNUM_UNION\n>> opclass procedure out there. I guess doing that for minmax+inclusion is\n>> not a huge deal, but what about external opclasses? And without the fix\n>> the indexes are effectively broken. Fixing this outside in brin.c (in\n>> the union procedure) fixes this for every opclass procedure, without any\n>> actual limitation of functinality (14+ does that anyway).\n> \n> About the hypothetical question, you could as well ask what about\n> unicorns. I have never seen any hint that any external opclass exist.\n> I am all for maintaining compatibility, but I think this concern is\n> overblown for BRIN. 
Anyway, I think your proposed fix is better than\n> changing individual 'union' support procs, so it doesn't matter.\n> \n\nOK\n\n> As far as I understood, you're now worried that there will be an\n> incompatibility because we will fail to call the 'union' procedure in\n> cases where we previously called it? In other words, you fear that some\n> hypothetical opclass was handling the NULL values in some way that's\n> incompatible with this? I haven't thought terribly hard about this, but\n> I can't see a way for this to cause incompatibilities.\n> \n\nYeah, the possible incompatibility is one concern - I have a hard time\nimagining such an opclass, because it'd have to handle NULLs in some\nstrange way. But and as you noted, we're not aware of any external BRIN\nopclasses, so maybe this is OK.\n\nThe other concern is more generic - as I mentioned, moving the NULL\nhandling from opclasses to brin.c is what we did in PG14, so this feels\na bit like a backport, and I dislike that a little bit.\n\n>> But maybe someone thinks this is a bad idea and we should do something\n>> else in the backbranches?\n> \n> I think the new handling of NULLs in commit 72ccf55cb99c (\"Move IS [NOT]\n> NULL handling from BRIN support functions\") is better than what was\n> there before, so I don't object to backpatching it now that we know it's\n> necessary to fix a bug, and also we have field experience that the\n> approach is solid.\n> \n\nOK, good to hear.\n\n> The attached patch is just a pointer to comments that I think need light\n> edition. There's also a typo \"bot\" (for \"both\") in a comment that I\n> think would go away if you accept my suggestion to store 'empty' at the\n> tuple level. Note that I worked with the REL_14_STABLE sources, because\n> for some reason I thought that that was the newest that needed\n> backpatching of 72ccf55cb99c, but now that I'm finishing this email I\n> realize that I should have used 13 instead /facepalm\n> \n\nThanks. 
I'll try to rework the patches to use the bt_info unused bit,\nand report back in a week or two.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Mar 2023 13:14:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Hi,\n\nIt took me a while but I finally got back to reworking this to use the\nbt_info bit, as proposed by Alvaro. And it turned out to work great,\nbecause (a) it's a tuple-level flag, i.e. the right place, and (b) it\ndoes not overload existing flags.\n\nThis greatly simplified the code in add_values_to_range and (especially)\nunion_tuples, making it much easier to understand, I think.\n\nOne disadvantage is we are unable to see which ranges are empty in\ncurrent pageinspect, but 0002 addresses that by adding \"empty\" column to\nthe brin_page_items() output. That's a matter for master only, though.\nIt's a trivial patch and it makes it easier/possible to test this, so we\nshould consider squeezing it into PG16.\n\nI did quite a bit of testing - the attached 0003 adds extra tests, but I\ndon't propose to get this committed as is - it's rather overkill. Maybe\nsome reduced version of it ...\n\nThe hardest thing to test is the union_tuples() part, as it requires\nconcurrent operations with \"correct\" timing. Easy to simulate by\nbreakpoints in GDB, not so much in plain regression/TAP tests.\n\nThere's also a stress test, doing a lot of randomized summarizations,\netc. Without the fix this failed in maybe 30% of runs, now I did ~100\nruns without a single failure.\n\nI haven't done any backporting, but I think it should be simpler than\nwith the earlier approach. I wonder if we need to care about starting to\nuse the previously unused bit - I don't think so, in the worst case\nwe'll just ignore it, but maybe I'm missing something (e.g. when using\nphysical replication).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Mar 2023 16:30:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Hi,\n\nhere's an updated version of the patch, including a backport version. I\nended up making the code yet a bit closer to master by introducing\nadd_values_to_range(). The current pre-14 code has the loop adding data\nto the BRIN tuple in two places, but with the \"fixed\" logic handling\nNULLs and the empty_range flag the amount of duplicated code got too\nhigh, so this seems reasonable.\n\nBoth cases have a patch extending pageinspect to show the new flag, but\nobviously we should commit that only in master. In backbranches it's\nmeant only to make testing easier.\n\nI plan to do a bit more testing, and I'd welcome some feedback - it's a\nlong-standing bug, and it'd be good to finally get this fixed. I don't\nthink the patch can be made any simpler.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 23 Apr 2023 22:43:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 2023-Apr-23, Tomas Vondra wrote:\n\n> here's an updated version of the patch, including a backport version. I\n> ended up making the code yet a bit closer to master by introducing\n> add_values_to_range(). The current pre-14 code has the loop adding data\n> to the BRIN tuple in two places, but with the \"fixed\" logic handling\n> NULLs and the empty_range flag the amount of duplicated code got too\n> high, so this seem reasonable.\n\nIn backbranches, the new field to BrinMemTuple needs to be at the end of\nthe struct, to avoid ABI breakage.\n\nThere's a comment in add_values_to_range with a typo \"If the range was had\".\nAlso, \"So we should not see empty range that was not modified\" should\nperhaps be \"should not see an empty range\".\n\n(As for your FIXME comment in brin_memtuple_initialize, I think you're\ncorrect: we definitely need to reset bt_placeholder. Otherwise, we risk\nplaces that call it in a loop using a BrinMemTuple with one range with\nthe flag set, in a range where that doesn't hold. It might be\nimpossible for this to happen, given how narrow the conditions are on\nwhich bt_placeholder is used; but it seems safer to reset it anyway.)\n\nSome pgindent noise would be induced by this patch. I think it's worth\ncleaning up ahead of time.\n\nI did a quick experiment of turning the patches over -- applying test\nfirst, code fix after (requires some conflict fixing). With that I\nverified that the behavior of 'hasnulls' indeed changes with the code\nfix.\n\n> Both cases have a patch extending pageinspect to show the new flag, but\n> obviously we should commit that only in master. In backbranches it's\n> meant only to make testing easier.\n\nIn backbranches, I think it should be reasonably easy to add a\n--1.7--1.7.1.sql file and set the default version to 1.7.1; that then\nenables us to have the functionality (and the tests) in older branches\ntoo. 
If you just add a --1.X.1--1.12.sql version to each branch that's\nidentical to that branch's current pageinspect version upgrade script,\nthat would let us have working upgrade paths for all branches. This is\na bit laborious but straightforward enough.\n\nIf you don't feel like adding that, I volunteer to add the necessary\nscripts to all branches after you commit the bugfix, and ensure that all\nupgrade paths work correctly.\n\n> I plan to do a bit more testing, I'd welcome some feedback - it's a\n> long-standing bug, and it'd be good to finally get this fixed. I don't\n> think the patch can be made any simpler.\n\nThe approach looks good to me.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)\n\n\n",
"msg_date": "Mon, 24 Apr 2023 17:36:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2023-Apr-23, Tomas Vondra wrote:\n>> Both cases have a patch extending pageinspect to show the new flag, but\n>> obviously we should commit that only in master. In backbranches it's\n>> meant only to make testing easier.\n\n> In backbranches, I think it should be reasonably easy to add a\n> --1.7--1.7.1.sql file and set the default version to 1.7.1; that then\n> enables us to have the functionality (and the tests) in older branches\n> too. If you just add a --1.X.1--1.12.sql version to each branch that's\n> identical to that branch's current pageinspect version upgrade script,\n> that would let us have working upgrade paths for all branches. This is\n> a bit laborious but straightforward enough.\n\n\"A bit laborious\"? That seems enormously out of proportion to the\nbenefit of putting this test case into back branches. It will have\ncosts for end users too, not only us. As an example, it would\nunnecessarily block some upgrade paths, if the upgraded-to installation\nis slightly older and lacks the necessary --1.X.1--1.12 script.\n\n> If you don't feel like adding that, I volunteer to add the necessary\n> scripts to all branches after you commit the bugfix, and ensure that all\n> upgrade paths work correctly.\n\nI do not think this should happen at all, whether you're willing to\ndo the work or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Apr 2023 11:46:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "\n\nOn 4/24/23 17:46, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On 2023-Apr-23, Tomas Vondra wrote:\n>>> Both cases have a patch extending pageinspect to show the new flag, but\n>>> obviously we should commit that only in master. In backbranches it's\n>>> meant only to make testing easier.\n> \n>> In backbranches, I think it should be reasonably easy to add a\n>> --1.7--1.7.1.sql file and set the default version to 1.7.1; that then\n>> enables us to have the functionality (and the tests) in older branches\n>> too. If you just add a --1.X.1--1.12.sql version to each branch that's\n>> identical to that branch's current pageinspect version upgrade script,\n>> that would let us have working upgrade paths for all branches. This is\n>> a bit laborious but straightforward enough.\n> \n> \"A bit laborious\"? That seems enormously out of proportion to the\n> benefit of putting this test case into back branches. It will have\n> costs for end users too, not only us. As an example, it would\n> unecessarily block some upgrade paths, if the upgraded-to installation\n> is slightly older and lacks the necessary --1.X.1--1.12 script.\n> \n\nWhy would that block the upgrade? Presumably we'd add two upgrade\nscripts in the master, to allow upgrade both from 1.X and 1.X.1.\n\n>> If you don't feel like adding that, I volunteer to add the necessary\n>> scripts to all branches after you commit the bugfix, and ensure that all\n>> upgrade paths work correctly.\n> \n> I do not think this should happen at all, whether you're willing to\n> do the work or not.\n\nFWIW I'm fine with not doing that. As mentioned, I only included this\npatch to make testing the patch easier (otherwise the flag is impossible\nto inspect directly).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Apr 2023 23:05:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 4/24/23 17:46, Tom Lane wrote:\n>> \"A bit laborious\"? That seems enormously out of proportion to the\n>> benefit of putting this test case into back branches. It will have\n>> costs for end users too, not only us. As an example, it would\n>> unecessarily block some upgrade paths, if the upgraded-to installation\n>> is slightly older and lacks the necessary --1.X.1--1.12 script.\n\n> Why would that block the upgrade? Presumably we'd add two upgrade\n> scripts in the master, to allow upgrade both from 1.X and 1.X.1.\n\nIt would for example block updating from 14.8 or later to 15.2, since\na 15.2 installation would not have the script to update from 1.X.1.\n\nYeah, people could work around that by only installing the latest\nversion, but there are plenty of real-world scenarios where you'd be\ncreating friction, or at least confusion. I do not think that this\ntest case is worth it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Apr 2023 17:10:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "\n\nOn 4/24/23 23:05, Tomas Vondra wrote:\n> \n> \n> On 4/24/23 17:46, Tom Lane wrote:\n>> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>>> On 2023-Apr-23, Tomas Vondra wrote:\n>>>> Both cases have a patch extending pageinspect to show the new flag, but\n>>>> obviously we should commit that only in master. In backbranches it's\n>>>> meant only to make testing easier.\n>>\n>>> In backbranches, I think it should be reasonably easy to add a\n>>> --1.7--1.7.1.sql file and set the default version to 1.7.1; that then\n>>> enables us to have the functionality (and the tests) in older branches\n>>> too. If you just add a --1.X.1--1.12.sql version to each branch that's\n>>> identical to that branch's current pageinspect version upgrade script,\n>>> that would let us have working upgrade paths for all branches. This is\n>>> a bit laborious but straightforward enough.\n>>\n>> \"A bit laborious\"? That seems enormously out of proportion to the\n>> benefit of putting this test case into back branches. It will have\n>> costs for end users too, not only us. As an example, it would\n>> unecessarily block some upgrade paths, if the upgraded-to installation\n>> is slightly older and lacks the necessary --1.X.1--1.12 script.\n>>\n> \n> Why would that block the upgrade? Presumably we'd add two upgrade\n> scripts in the master, to allow upgrade both from 1.X and 1.X.1.\n> \n\nD'oh! I just realized I misunderstood the issue. Yes, if the target\nversion is missing the new script, that won't work. I'm not sure how\nlikely that is - in my experience people refresh versions at the same\ntime - but it's certainly an assumption we shouldn't make, I guess.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Apr 2023 23:10:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 4/24/23 17:36, Alvaro Herrera wrote:\n> On 2023-Apr-23, Tomas Vondra wrote:\n> \n>> here's an updated version of the patch, including a backport version. I\n>> ended up making the code yet a bit closer to master by introducing\n>> add_values_to_range(). The current pre-14 code has the loop adding data\n>> to the BRIN tuple in two places, but with the \"fixed\" logic handling\n>> NULLs and the empty_range flag the amount of duplicated code got too\n>> high, so this seem reasonable.\n> \n> In backbranches, the new field to BrinMemTuple needs to be at the end of\n> the struct, to avoid ABI breakage.\n> \n\nGood point.\n\n> There's a comment in add_values_to_range with a typo \"If the range was had\".\n> Also, \"So we should not see empty range that was not modified\" should\n> perhaps be \"should not see an empty range\".\n> \n\nOK, I'll check the wording of the comments.\n\n> (As for your FIXME comment in brin_memtuple_initialize, I think you're\n> correct: we definitely need to reset bt_placeholder. Otherwise, we risk\n> places that call it in a loop using a BrinMemTuple with one range with\n> the flag set, in a range where that doesn't hold. It might be\n> impossible for this to happen, given how narrow the conditions are on\n> which bt_placeholder is used; but it seems safer to reset it anyway.)\n> \n\nYeah. But isn't that a separate preexisting issue, strictly speaking?\n\n> Some pgindent noise would be induced by this patch. I think it's worth\n> cleaning up ahead of time.\n> \n\nTrue. Will do.\n\n> I did a quick experiment of turning the patches over -- applying test\n> first, code fix after (requires some conflict fixing). With that I\n> verified that the behavior of 'hasnulls' indeed changes with the code\n> fix.\n> \n\nThanks. Could you do some testing of the union_tuples stuff too? It's a\nbit tricky part - the behavior is timing sensitive, so testing it\nrequires gdb breakpoints or something like that. 
I think\nit's correct, but it'd be nice to check that.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Apr 2023 23:20:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 2023-Apr-24, Tomas Vondra wrote:\n\n> On 4/24/23 17:36, Alvaro Herrera wrote:\n\n> > (As for your FIXME comment in brin_memtuple_initialize, I think you're\n> > correct: we definitely need to reset bt_placeholder. Otherwise, we risk\n> > places that call it in a loop using a BrinMemTuple with one range with\n> > the flag set, in a range where that doesn't hold. It might be\n> > impossible for this to happen, given how narrow the conditions are on\n> > which bt_placeholder is used; but it seems safer to reset it anyway.)\n> \n> Yeah. But isn't that a separate preexisting issue, strictly speaking?\n\nYes.\n\n> > I did a quick experiment of turning the patches over -- applying test\n> > first, code fix after (requires some conflict fixing). With that I\n> > verified that the behavior of 'hasnulls' indeed changes with the code\n> > fix.\n> \n> Thanks. Could you do some testing of the union_tuples stuff too? It's a\n> bit tricky part - the behavior is timing sensitive, so testing it\n> requires gdb breakpoints breakpoints or something like that. I think\n> it's correct, but it'd be nice to check that.\n\nI'll have a look.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Tue, 25 Apr 2023 11:20:40 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "\n\nOn 4/24/23 23:20, Tomas Vondra wrote:\n> On 4/24/23 17:36, Alvaro Herrera wrote:\n>> On 2023-Apr-23, Tomas Vondra wrote:\n>>\n>>> here's an updated version of the patch, including a backport version. I\n>>> ended up making the code yet a bit closer to master by introducing\n>>> add_values_to_range(). The current pre-14 code has the loop adding data\n>>> to the BRIN tuple in two places, but with the \"fixed\" logic handling\n>>> NULLs and the empty_range flag the amount of duplicated code got too\n>>> high, so this seem reasonable.\n>>\n>> In backbranches, the new field to BrinMemTuple needs to be at the end of\n>> the struct, to avoid ABI breakage.\n>>\n\nUnfortunately, this is not actually possible :-(\n\nThe BrinMemTuple has a FLEXIBLE_ARRAY_MEMBER at the end, so we can't\nplace anything after it. I think we have three options:\n\na) some other approach? - I really can't see any, except maybe for going\nback to the previous approach (i.e. encoding the info using the existing\nBrinValues allnulls/hasnulls flags)\n\nb) encoding the info in existing BrinMemTuple flags - e.g. we could use\nbt_placeholder to store two bits, not just one. Seems a bit ugly.\n\nc) ignore the issue - AFAICS this would be an issue only for (external)\ncode accessing BrinMemTuple structs, but I don't think we're aware of\nany out-of-core BRIN opclasses or anything like that ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 May 2023 00:13:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "Hi,\n\nOn Sun, May 07, 2023 at 12:13:07AM +0200, Tomas Vondra wrote:\n>\n> c) ignore the issue - AFAICS this would be an issue only for (external)\n> code accessing BrinMemTuple structs, but I don't think we're aware of\n> any out-of-core BRIN opclasses or anything like that ...\n\nFTR there's at least postgis that implements BRIN opclasses (for geometries and\ngeographies), but there's nothing fancy in the implementation and it doesn't\naccess BrinMemTuple struct.\n\n\n",
"msg_date": "Sun, 7 May 2023 13:08:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "\n\nOn 5/7/23 07:08, Julien Rouhaud wrote:\n> Hi,\n> \n> On Sun, May 07, 2023 at 12:13:07AM +0200, Tomas Vondra wrote:\n>>\n>> c) ignore the issue - AFAICS this would be an issue only for (external)\n>> code accessing BrinMemTuple structs, but I don't think we're aware of\n>> any out-of-core BRIN opclasses or anything like that ...\n> \n> FTR there's at least postgis that implments BRIN opclasses (for geometries and\n> geographies), but there's nothing fancy in the implementation and it doesn't\n> access BrinMemTuple struct.\n\nRight. I believe that should be fine, because opclasses don't access the\ntuple directly - instead we pass pointers to individual pieces. But\nmaybe it'd be a good idea to test this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 May 2023 14:50:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
{
"msg_contents": "On 2023-May-07, Tomas Vondra wrote:\n\n> > Álvaro wrote:\n> >> In backbranches, the new field to BrinMemTuple needs to be at the end of\n> >> the struct, to avoid ABI breakage.\n> \n> Unfortunately, this is not actually possible :-(\n> \n> The BrinMemTuple has a FLEXIBLE_ARRAY_MEMBER at the end, so we can't\n> place anything after it. I think we have three options:\n> \n> a) some other approach? - I really can't see any, except maybe for going\n> back to the previous approach (i.e. encoding the info using the existing\n> BrinValues allnulls/hasnulls flags)\n\nActually, mine was quite the stupid suggestion: the BrinMemTuple already\nhas a 3 byte hole in the place where you originally wanted to add the\nflag:\n\nstruct BrinMemTuple {\n _Bool bt_placeholder; /* 0 1 */\n\n /* XXX 3 bytes hole, try to pack */\n\n BlockNumber bt_blkno; /* 4 4 */\n MemoryContext bt_context; /* 8 8 */\n Datum * bt_values; /* 16 8 */\n _Bool * bt_allnulls; /* 24 8 */\n _Bool * bt_hasnulls; /* 32 8 */\n BrinValues bt_columns[]; /* 40 0 */\n\n /* size: 40, cachelines: 1, members: 7 */\n /* sum members: 37, holes: 1, sum holes: 3 */\n /* last cacheline: 40 bytes */\n};\n\nso putting it there was already not causing any ABI breakage. So, the\nsolution to this problem of not being able to put it at the end is just\nto return the struct to your original formulation.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n",
"msg_date": "Mon, 15 May 2023 12:06:07 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
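The padding hole Álvaro points out above can be checked mechanically. The sketch below is not PostgreSQL code: it mimics only the fixed-size prefix of BrinMemTuple with stand-in types, and the new field name `bt_empty_range` is hypothetical. On a typical LP64 ABI it asserts that a bool placed in the existing 3-byte hole changes neither the struct size nor the offsets of later members, which is why the backbranch fix avoids ABI breakage.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t BlockNumber;   /* stand-in for PostgreSQL's BlockNumber */

/* Layout before the fix: 3 bytes of padding follow bt_placeholder. */
typedef struct BrinMemTupleBefore
{
    _Bool       bt_placeholder; /* offset 0, then a 3-byte hole */
    BlockNumber bt_blkno;       /* offset 4 */
    void       *bt_context;     /* stand-in for MemoryContext */
    void       *bt_values;      /* stand-in for Datum * */
} BrinMemTupleBefore;

/* Layout after the fix: the new flag occupies part of the hole.
 * (bt_empty_range is an invented name for illustration.) */
typedef struct BrinMemTupleAfter
{
    _Bool       bt_placeholder;
    _Bool       bt_empty_range; /* fits in the former padding */
    BlockNumber bt_blkno;       /* still offset 4 */
    void       *bt_context;
    void       *bt_values;
} BrinMemTupleAfter;
```

Running `pahole` on the real struct, as in the quoted output, shows the same thing: the hole is between `bt_placeholder` and `bt_blkno`, so a one-byte member there is layout-neutral.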
{
"msg_contents": "On 5/15/23 12:06, Alvaro Herrera wrote:\n> On 2023-May-07, Tomas Vondra wrote:\n> \n>>> Álvaro wrote:\n>>>> In backbranches, the new field to BrinMemTuple needs to be at the end of\n>>>> the struct, to avoid ABI breakage.\n>>\n>> Unfortunately, this is not actually possible :-(\n>>\n>> The BrinMemTuple has a FLEXIBLE_ARRAY_MEMBER at the end, so we can't\n>> place anything after it. I think we have three options:\n>>\n>> a) some other approach? - I really can't see any, except maybe for going\n>> back to the previous approach (i.e. encoding the info using the existing\n>> BrinValues allnulls/hasnulls flags)\n> \n> Actually, mine was quite the stupid suggestion: the BrinMemTuple already\n> has a 3 byte hole in the place where you originally wanted to add the\n> flag:\n> \n> struct BrinMemTuple {\n> _Bool bt_placeholder; /* 0 1 */\n> \n> /* XXX 3 bytes hole, try to pack */\n> \n> BlockNumber bt_blkno; /* 4 4 */\n> MemoryContext bt_context; /* 8 8 */\n> Datum * bt_values; /* 16 8 */\n> _Bool * bt_allnulls; /* 24 8 */\n> _Bool * bt_hasnulls; /* 32 8 */\n> BrinValues bt_columns[]; /* 40 0 */\n> \n> /* size: 40, cachelines: 1, members: 7 */\n> /* sum members: 37, holes: 1, sum holes: 3 */\n> /* last cacheline: 40 bytes */\n> };\n> \n> so putting it there was already not causing any ABI breakage. So, the\n> solution to this problem of not being able to put it at the end is just\n> to return the struct to your original formulation.\n> \n\nThanks, that's pretty lucky. It means we're not breaking on-disk format\nnor ABI, which is great.\n\nAttached is a final version of the patches - I intend to do a bit more\ntesting, go through the comments once more, and get this committed today\nor perhaps tomorrow morning, so that it gets into beta1.\n\nUnfortunately, while polishing the patches, I realized union_tuples()\nhas yet another long-standing bug with handling NULL values, because it\ndoes this:\n\n /* Adjust \"hasnulls\". 
*/\n if (!col_a->bv_hasnulls && col_b->bv_hasnulls)\n col_a->bv_hasnulls = true;\n\nbut let's assume \"col_a\" is a summary representing \"1\" and \"col_b\"\nrepresents NULL (col_b->bv_hasnulls=false col_b->bv_allnulls=true).\nWell, in that case we fail to \"remember\" col_a should represent NULL\nvalues too :-(\n\nThis is somewhat separate issue, because it's unrelated to empty ranges\n(neither of the two ranges is empty). It's hard to test it, because it\nrequires a particular timing of the concurrent actions, but a breakpoint\nin brin.c on the brin_can_do_samepage_update call (in summarize_range)\ndoes the trick for manual testing.\n\n0001 fixes the issue. 0002 is the original fix, and 0003 is just the\npageinspect changes (for master only).\n\nFor the backbranches, I thought about making the code more like master\n(by moving some of the handling from opclasses to brin.c), but decided\nnot to. It'd be low-risk, but it feels wrong to kinda do what the master\ndoes under \"oi_regular_nulls\" flag.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 18 May 2023 20:45:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
},
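The union_tuples() oversight Tomas describes (merging a summary of "1" with an all-NULLs summary and forgetting the NULLs) fits in a few lines. This is a hedged sketch, not the committed fix: `BrinValsSketch` and `merge_null_flags` are invented names modeling only the two boolean flags of BrinValues.

```c
#include <assert.h>
#include <stdbool.h>

typedef struct BrinValsSketch
{
    bool hasnulls;  /* range contains at least one NULL */
    bool allnulls;  /* range contains only NULLs */
} BrinValsSketch;

/*
 * Merge the NULL-tracking flags of two range summaries.  The union has
 * NULLs if either side already had some, or if exactly one side was
 * all-NULLs -- the case the old "adjust hasnulls" code lost.
 */
static BrinValsSketch
merge_null_flags(BrinValsSketch a, BrinValsSketch b)
{
    BrinValsSketch r;

    r.allnulls = a.allnulls && b.allnulls;
    r.hasnulls = a.hasnulls || b.hasnulls || (a.allnulls != b.allnulls);
    return r;
}
```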
{
"msg_contents": "On 5/18/23 20:45, Tomas Vondra wrote:\n> ...\n>\n> 0001 fixes the issue. 0002 is the original fix, and 0003 is just the\n> pageinspect changes (for master only).\n> \n> For the backbranches, I thought about making the code more like master\n> (by moving some of the handling from opclasses to brin.c), but decided\n> not to. It'd be low-risk, but it feels wrong to kinda do what the master\n> does under \"oi_regular_nulls\" flag.\n> \n\nI've now pushed all these patches into relevant branches, after some\nminor last-minute tweaks, and so far it didn't cause any buildfarm\nissues. Assuming this fully fixes the NULL-handling for BRIN, this\nleaves just the deadlock issue discussed in [1].\n\nIt seems rather unfortunate all these issues went unnoticed / unreported\nessentially since BRIN was introduced in 9.5. To some extent it might be\nexplained by fairly low likelihood of actually hitting the issue (just\nthe right timing, concurrency with summarization, NULL values, ...). It\ntook me quite a bit of time and luck to (accidentally) hit these issues\nwhile stress testing the code.\n\nBut there's also the problem of writing tests for this kind of thing. To\nexercise the interesting parts (e.g. the union_tuples), it's necessary\nto coordinate the order of concurrent steps - but what's a good generic\nway to do that (which we could do in TAP tests)? In manual testing it's\ndoable by setting breakpoints on a particular lines, and step through\nthe concurrent processes that way.\n\nBut that doesn't seem like a particularly great solution for regression\ntests. I can imagine adding some sort of \"probes\" into the code and then\nattaching breakpoints to those, but surely we're not the first project\nneeding this ...\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/261e68bc-f5f5-5234-fb2c-af4f583513c0@enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 May 2023 03:04:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
}
] |
[
{
"msg_contents": "Hi all!\n\nI think I may have stumbled across a case of wrong results on HEAD (same\nthrough version 9.6, though interestingly 9.5 produces different results\naltogether).\n\ntest=# SELECT i AS ai1, i AS ai2 FROM generate_series(1,3)i GROUP BY\nai2, ROLLUP(ai1) ORDER BY ai1, ai2;\n\n ai1 | ai2\n-----+-----\n 1 | 1\n | 1\n 2 | 2\n | 2\n 3 | 3\n | 3\n(6 rows)\n\nI had expected:\n\n ai1 | ai2\n-----+-----\n 1 | 1\n 2 | 2\n 3 | 3\n | 1\n | 2\n | 3\n(6 rows)\n\nIt seems to me that the plan is missing a Sort node (on ai1 and ai2) above the\nAggregate node.\n\n QUERY PLAN\n------------------------------------------------\n GroupAggregate\n Group Key: i, i\n Group Key: i\n -> Sort\n Sort Key: i\n -> Function Scan on generate_series i\n\nI have a hunch part of the issue may be an assumption that a duplicate aliased\ncolumn will produce the same column values and hence isn't included in the\nrange table, nor subsequently the pathkeys. However, that assumption does not\nseem to be true for queries with multiple grouping set specifications:\n\ntest=# SELECT i as ai1, i as ai2 FROM (values (1),(2),(3)) v(i) GROUP\nBY ai1, ROLLUP(ai2);\n ai1 | ai2\n-----+-----\n 1 | 1\n 2 | 2\n 3 | 3\n 1 |\n 2 |\n 3 |\n(6 rows)\n\nIt seems to me that excluding the duplicate alias from the pathkeys is leading\nto a case where the group order is incorrectly determined to satisfy the sort\norder. Thus create_ordered_paths() decides against applying an explicit sort\nnode. But simply forcing an explicit sort still seems wrong since we've\neffectively lost a relevant column for the sort.\n\nI tinkered a bit and hacked together an admittedly ugly patch that triggers an\nexplicit sort constructed from the parse tree. An alternative approach I had\nconsidered was to update the rewriteHandler to explicitly force the existence of\nthe duplicate alias column in the range tables. But that also felt meh.\n\nDoes this seem like a legit issue? 
And if so, any thoughts on alternative\napproaches?\n\nThanks,\nDavid Kimura",
"msg_date": "Fri, 21 Oct 2022 10:11:38 -0700",
"msg_from": "David Kimura <david.g.kimura@gmail.com>",
"msg_from_op": true,
"msg_subject": "Multiple grouping set specs referencing duplicate alias"
},
{
"msg_contents": "David Kimura <david.g.kimura@gmail.com> writes:\n> I think I may have stumbled across a case of wrong results on HEAD (same\n> through version 9.6, though interestingly 9.5 produces different results\n> altogether).\n\n> test=# SELECT i AS ai1, i AS ai2 FROM generate_series(1,3)i GROUP BY\n> ai2, ROLLUP(ai1) ORDER BY ai1, ai2;\n\nYeah, this is an instance of an issue we've known about for awhile:\nwhen using grouping sets (ROLLUP), the planner fails to distinguish\nbetween \"ai1\" and \"ai1 as possibly nulled by the action of the\ngrouping node\". This has been discussed at, eg, [1] and [2].\nThe direction I'd like to take to fix it is to invent explicit\nlabeling of Vars that have been nulled by some operation such as\nouter joins or grouping, and then represent grouping set outputs\nas either PlaceHolderVars or Vars tied to a new RTE that represents\nthe grouping step. I have been working on a patch that'd do the\nfirst half of that [3], but it's been slow going, because we've\nindulged in a lot of semantic squishiness in this area and cleaning\nit all up is a large undertaking.\n\n> I tinkered a bit and hacked together an admittedly ugly patch that triggers an\n> explicit sort constructed from the parse tree.\n\nI seriously doubt that that'll fix all the issues in this area.\nWe really really need to understand that a PathKey based on\nthe scan-level value of a Var is different from a PathKey based\non a post-nulling-step value.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAMbWs48AtQTQGk37MSyDk_EAgDO3Y0iA_LzvuvGQ2uO_Wh2muw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/7dbdcf5c-b5a6-ef89-4958-da212fe10176%40iki.fi\n[3] https://www.postgresql.org/message-id/flat/830269.1656693747@sss.pgh.pa.us\n\n\n",
"msg_date": "Sun, 23 Oct 2022 19:49:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Multiple grouping set specs referencing duplicate alias"
}
] |
[
{
"msg_contents": "Hello, hackers.\n\nIn current master, as well as in REL_15_STABLE, installcheck in \ncontrib/citext fails in most locales, if we use ICU as a locale provider:\n\n$ rm -fr data; initdb --locale-provider icu --icu-locale en-US -D data \n&& pg_ctl -D data -l logfile start && make -C contrib/citext \ninstallcheck; pg_ctl -D data stop; cat contrib/citext/regression.diffs\n...\ntest citext ... ok 457 ms\ntest citext_utf8 ... FAILED 21 ms\n...\ndiff -u \n/home/ashutosh/pg/REL_15_STABLE/contrib/citext/expected/citext_utf8.out \n/home/ashutosh/pg/REL_15_STABLE/contrib/citext/results/citext_utf8.out\n--- \n/home/ashutosh/pg/REL_15_STABLE/contrib/citext/expected/citext_utf8.out \n 2022-07-14 17:45:31.747259743 +0300\n+++ \n/home/ashutosh/pg/REL_15_STABLE/contrib/citext/results/citext_utf8.out \n 2022-10-21 19:43:21.146044062 +0300\n@@ -54,7 +54,7 @@\n SELECT 'i'::citext = 'İ'::citext AS t;\n t\n ---\n- t\n+ f\n (1 row)\n\nThe reason is that in ICU lowercasing Unicode symbol \"İ\" (U+0130\n\"LATIN CAPITAL LETTER I WITH DOT ABOVE\") can give two valid results:\n- \"i\", i.e. \"U+0069 LATIN SMALL LETTER I\" in \"tr\" and \"az\" locales.\n- \"i̇\", i.e. \"U+0069 LATIN SMALL LETTER I\" followed by \"U+0307 COMBINING\n DOT ABOVE\" in all other locales I've tried (including \"en-US\", \"de\",\n \"ru\", etc).\nAnd the way this test is currently written only accepts plain latin \"i\", \nwhich might be true in glibc, but is not so in ICU. Verified on ICU \n70.1, but I've seen this on few other ICU versions as well, so I think \nthis is probably an ICU's feature, not a bug(?).\n\nSince we probably want installcheck in contrib/citext to pass on\ndatabases with various locales, including reasonable ICU-based ones,\nI suggest to fix this test by accepting either of outputs as valid.\n\nI can see two ways of doing that:\n1. change SQL in the test to use \"IN\" instead of \"=\";\n2. 
add an alternative output.\n\nI think in this case \"IN\" is better, because that allows a single \ncomment to address both possible outputs and to avoid unnecessary \nduplication.\n\nI've attached a patch authored mostly by my colleague, Roman Zharkov, as \none possible fix.\n\nOnly versions 15+ are affected.\n\nAny comments?\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru",
"msg_date": "Fri, 21 Oct 2022 20:23:33 +0300",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "patch suggestion: Fix citext_utf8 test's \"Turkish I\" with ICU\n collation provider"
}
] |
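The locale dependence above comes down to bytes. Lowercasing U+0130 yields one code point under the tr/az rules but two code points elsewhere; the sketch below does not call ICU, it merely hard-codes the two UTF-8 encodings to show why a test expecting plain "i" fails under most ICU collations.

```c
#include <assert.h>
#include <string.h>

/* Lowercase of U+0130 in the tr/az locales: plain U+0069 ("i"). */
static const char lower_turkish[] = "i";

/* Lowercase of U+0130 elsewhere: U+0069 followed by U+0307
 * (COMBINING DOT ABOVE), i.e. "i" plus the UTF-8 bytes 0xCC 0x87. */
static const char lower_default[] = "i\xcc\x87";
```

A byte-wise equality test against `"i"` can therefore only match the first form, which is exactly why accepting either output (the `IN` formulation in the patch) is needed.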
[
{
"msg_contents": "Hi, hackers\n\nI don't quite understand FullTransactionIdAdvance(), correct me if I’m wrong.\n\n\n/*\n * Advance a FullTransactionId variable, stepping over xids that would appear\n * to be special only when viewed as 32bit XIDs.\n */\nstatic inline void\nFullTransactionIdAdvance(FullTransactionId *dest)\n{\n\tdest->value++;\n\n\t/* see FullTransactionIdAdvance() */\n\tif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n return;\n\n\twhile (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n dest->value++;\n}\n\n#define XidFromFullTransactionId(x) ((x).value)\n#define FullTransactionIdPrecedes(a, b)\t((a).value < (b).value)\n\nCan the codes reach line: while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)?\nAs we will return if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId)), and the two judgements seem equal.\nAnother confusion is the comments: /* see FullTransactionIdAdvance() */, is its own itself.\nCould anyone explain this? 
Thanks in advance.\n\nRegards,\nZhang Mingli",
"msg_date": "Sat, 22 Oct 2022 11:32:47 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "doubt about FullTransactionIdAdvance()"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-22 11:32:47 +0800, Zhang Mingli wrote:\n> Hi, hackers\n> \n> I don't quite understand FullTransactionIdAdvance(), correct me if I’m wrong.\n> \n> \n> /*\n> * Advance a FullTransactionId variable, stepping over xids that would appear\n> * to be special only when viewed as 32bit XIDs.\n> */\n> static inline void\n> FullTransactionIdAdvance(FullTransactionId *dest)\n> {\n> \tdest->value++;\n> \n> \t/* see FullTransactionIdAdvance() */\n> \tif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n> return;\n> \n> \twhile (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n> dest->value++;\n> }\n> \n> #define XidFromFullTransactionId(x) ((x).value)\n> #define FullTransactionIdPrecedes(a, b)\t((a).value < (b).value)\n> \n> Can the codes reach line: while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)?\n> As we will return if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId)), and the two judgements seem equal.\n> Another confusion is the comments: /* see FullTransactionIdAdvance() */, is its own itself.\n> Could anyone explain this? Thanks in advance.\n\nFullTransactionId is 64bit. An \"old school\" xid is 32bit. The first branch is\nto protect against the special fxids that are actually below\nFirstNormalFullTransactionId:\n\n\tif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n\t\treturn;\n\nThe second branch is to protect against 64bit xids that would yield a 32bit\nxid below FirstNormalTransactionId after truncating to 32bit:\n\n\twhile (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n\t\tdest->value++;\n\nE.g. we don't want to modify the 64bit xid 0 (meaning InvalidTransactionId) as\nit has special meaning. But we have to make sure that e.g. 
the 64bit xid\n0x100000000 won't exist, as it'd also result in InvalidTransactionId if\ntruncated to 32bit.\n\n\nHowever, it looks like this comment:\n\t/* see FullTransactionIdAdvance() */\n\tif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n\t\treturn;\n\nis bogus, and it's my fault. Looks like it's intending to reference\nFullTransactionIdRetreat().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 23 Oct 2022 10:16:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: doubt about FullTransactionIdAdvance()"
},
{
"msg_contents": "Hi, Andres\n\nOn Oct 24, 2022, 01:16 +0800, Andres Freund <andres@anarazel.de>, wrote:\n> Hi,\n>\n> On 2022-10-22 11:32:47 +0800, Zhang Mingli wrote:\n> > Hi, hackers\n> >\n> > I don't quite understand FullTransactionIdAdvance(), correct me if I’m wrong.\n> >\n> >\n> > /*\n> > * Advance a FullTransactionId variable, stepping over xids that would appear\n> > * to be special only when viewed as 32bit XIDs.\n> > */\n> > static inline void\n> > FullTransactionIdAdvance(FullTransactionId *dest)\n> > {\n> > dest->value++;\n> >\n> > /* see FullTransactionIdAdvance() */\n> > if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n> > return;\n> >\n> > while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n> > dest->value++;\n> > }\n> >\n> > #define XidFromFullTransactionId(x) ((x).value)\n> > #define FullTransactionIdPrecedes(a, b) ((a).value < (b).value)\n> >\n> > Can the codes reach line: while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)?\n> > As we will return if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId)), and the two judgements seem equal.\n> > Another confusion is the comments: /* see FullTransactionIdAdvance() */, is its own itself.\n> > Could anyone explain this? Thanks in advance.\n>\n> FullTransactionId is 64bit. An \"old school\" xid is 32bit. The first branch is\n> to protect against the special fxids that are actually below\n> FirstNormalFullTransactionId:\n>\n> if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n> return;\n>\n> The second branch is to protect against 64bit xids that would yield a 32bit\n> xid below FirstNormalTransactionId after truncating to 32bit:\n>\n> while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n> dest->value++;\n>\n> E.g. we don't want to modify the 64bit xid 0 (meaning InvalidTransactionId) as\n> it has special meaning. But we have to make sure that e.g. 
the 64bit xid\n> 0x100000000 won't exist, as it'd also result in InvalidTransactionId if\n> truncated to 32bit.\n>\n>\n> However, it looks like this comment:\n> /* see FullTransactionIdAdvance() */\n> if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n> return;\n>\n> is bogus, and it's my fault. Looks like it's intending to reference\n> FullTransactionIdRetreat().\n>\n> Greetings,\n>\n> Andres Freund\nNow I get it, thanks for your explanation.\n\nRegards,\nZhang Mingli",
"msg_date": "Mon, 24 Oct 2022 11:29:54 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doubt about FullTransactionIdAdvance()"
}
] |
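Andres's explanation condenses into a runnable sketch. This is not the PostgreSQL implementation (the real code wraps the counter in a FullTransactionId struct and uses the FirstNormal* macros), but the arithmetic is the same: fxids below 3 are permanently special, and any 64-bit value whose low 32 bits fall below 3 is skipped so the truncated xid never collides with them.

```c
#include <assert.h>
#include <stdint.h>

/* InvalidXid = 0, BootstrapXid = 1, FrozenXid = 2 are special. */
#define FIRST_NORMAL_XID  UINT64_C(3)

/* Hedged sketch of FullTransactionIdAdvance() over a bare uint64_t. */
static uint64_t
full_xid_advance(uint64_t fxid)
{
    fxid++;

    /* the three permanent special fxids are allowed to exist */
    if (fxid < FIRST_NORMAL_XID)
        return fxid;

    /* skip values whose low 32 bits would look special when truncated */
    while ((uint32_t) fxid < (uint32_t) FIRST_NORMAL_XID)
        fxid++;

    return fxid;
}
```

Note how the two branches differ only across the 32-bit wraparound: below 2^32 they test the same thing, which is what prompted the question in this thread.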
[
{
"msg_contents": "When the pg_dump 002_pg_dump.pl test generates the command to load the \nschema, it does\n\n # Add terminating semicolon\n $create_sql{$test_db} .= $tests{$test}->{create_sql} . \";\";\n\nIn some cases, this creates a duplicate semicolon, but more importantly, \nthis doesn't add any newline. So if you look at the result in either \nthe server log or in tmp_check/log/regress_log_002_pg_dump, it looks \nlike a complete mess. The attached patch makes the output look cleaner \nfor manual inspection: add semicolon only if necessary, and add two \nnewlines.",
"msg_date": "Sat, 22 Oct 2022 12:41:03 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_dump test: Make concatenated create_sql commands more readable"
}
] |
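The fix described above, appending a semicolon only when one is missing plus blank-line separation, hinges on one predicate. The test script itself is Perl, so the helper below is just a hedged C sketch of that check, with an invented name.

```c
#include <assert.h>
#include <string.h>

/*
 * Return nonzero when a SQL fragment still needs a terminating semicolon.
 * Trailing whitespace is ignored, since create_sql snippets may end with
 * a newline.
 */
static int
needs_semicolon(const char *sql)
{
    size_t len = strlen(sql);

    while (len > 0 && (sql[len - 1] == ' ' || sql[len - 1] == '\n' ||
                       sql[len - 1] == '\t'))
        len--;

    return len == 0 || sql[len - 1] != ';';
}
```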
[
{
"msg_contents": "Hi hackers.\n\nCan you please share some areas that would be good to start contributing?\n\nSome months ago I've got my first patch accept [1], and I'm looking to try to \nmake other contributions.\n\n\nThanks in advance!\n\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=6a1f082abac9da756d473e16238a906ca5a592dc\n\n--\nMatheus Alcantara\n\n\n",
"msg_date": "Sat, 22 Oct 2022 16:49:30 +0000",
"msg_from": "Matheus Alcantara <mths.dev@pm.me>",
"msg_from_op": true,
"msg_subject": "Interesting areas for beginners"
},
{
"msg_contents": "Hi Matheus,\n\n> Some months ago I've got my first patch accept [1], and I'm looking to try to\n> make other contributions.\n\nIn personal experience reviewing other people's code is a good\nstarting point. Firstly, IMO this is one of the most valuable\ncontributions, since the community is always short on reviewers.\nSecondly, in the process you will learn what the rest of the community\nis working on, which patches have good chances to be accepted, and\nlearn the implementation details of the system.\n\nAdditionally I would like to recommend the following materials for self-study:\n\n* https://www.amazon.com/Database-System-Concepts-Abraham-Silberschatz/dp/1260084507/\n** Especially the chapter available online about PostgreSQL\n* https://www.youtube.com/playlist?list=PLSE8ODhjZXjZaHA6QcxDfJ0SIWBzQFKEG\n* https://www.timescale.com/blog/how-and-why-to-become-a-postgresql-contributor/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 23 Oct 2022 13:23:56 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Interesting areas for beginners"
},
{
"msg_contents": "On Sun, Oct 23, 2022 at 6:24 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Matheus,\n>\n> > Some months ago I've got my first patch accept [1], and I'm looking to try to\n> > make other contributions.\n>\n> In personal experience reviewing other people's code is a good\n> starting point. Firstly, IMO this is one of the most valuable\n> contributions, since the community is always short on reviewers.\n> Secondly, in the process you will learn what the rest of the community\n> is working on, which patches have good chances to be accepted, and\n> learn the implementation details of the system.\n>\n> Additionally I would like to recommend the following materials for self-study:\n>\n> * https://www.amazon.com/Database-System-Concepts-Abraham-Silberschatz/dp/1260084507/\n> ** Especially the chapter available online about PostgreSQL\n> * https://www.youtube.com/playlist?list=PLSE8ODhjZXjZaHA6QcxDfJ0SIWBzQFKEG\n> * https://www.timescale.com/blog/how-and-why-to-become-a-postgresql-contributor/\n>\n\nI would second the recommendation to help with patch reviewing because\nit is one of the most valuable contributions you can make to the\nproject as well as a good way to start to build relationships with\nother contributors, which will be helpful the next time they are\ntearing apart one of your patches ;-)\n\nIn addition, two other resources to be aware of:\n* Paul Ramsey has a really nice write up on his thoughts on getting\nstarted hacking Postgres:\nhttp://blog.cleverelephant.ca/2022/10/postgresql-links.html\n\n* I suspect you may have seen these, but in case not, the wiki has\nseveral key pages to be aware of, which are linked to from\nhttps://wiki.postgresql.org/wiki/Development_information\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sun, 23 Oct 2022 16:28:13 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Interesting areas for beginners"
},
{
"msg_contents": "Thanks so much for the answers, I'll try to start looking at some patches.\n\n\n--\nMatheus Alcantara\n\n\n",
"msg_date": "Mon, 24 Oct 2022 22:13:55 +0000",
"msg_from": "Matheus Alcantara <mths.dev@pm.me>",
"msg_from_op": true,
"msg_subject": "Re: Interesting areas for beginners"
}
] |
[
{
"msg_contents": "Hi, Tomas:\nFor 0002-fixup-brin-has_nulls-20221022.patch :\n\n+ first_row = (bval->bv_hasnulls && bval->bv_allnulls);\n+\n+ if (bval->bv_hasnulls && bval->bv_allnulls)\n\nIt seems the if condition can be changed to `if (first_row)` which is more\nreadable.\n\nChhers\n\nHi, Tomas:For 0002-fixup-brin-has_nulls-20221022.patch :+ first_row = (bval->bv_hasnulls && bval->bv_allnulls);++ if (bval->bv_hasnulls && bval->bv_allnulls)It seems the if condition can be changed to `if (first_row)` which is more readable.Chhers",
"msg_date": "Sat, 22 Oct 2022 16:02:54 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing update of all_hasnulls in BRIN opclasses"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that there are several places where we use the spelling\n\"implementOr\" while the correct one seems to be \"implementEr\". Here is\nthe patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Sun, 23 Oct 2022 13:11:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Replace \"implementor\" with \"implementer\""
},
{
"msg_contents": "Hi hackers,\n\n> I noticed that there are several places where we use the spelling\n> \"implementOr\" while the correct one seems to be \"implementEr\". Here is\n> the patch.\n\nAfter a little more study I found evidence that both spellings can be\nacceptable [1]. As a non-native speaker I can't judge whether this is\ntrue or not and which spelling is preferable. I believe we should\nunify the spelling though. The reason why I initially thought\n\"implementOr\" is an incorrect spelling is because most spell-checking\ntools I personally use indicated so.\n\n[1] https://english.stackexchange.com/a/358111\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 23 Oct 2022 13:40:36 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Replace \"implementor\" with \"implementer\""
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> I noticed that there are several places where we use the spelling\n> \"implementOr\" while the correct one seems to be \"implementEr\". Here is\n> the patch.\n\nThey're both valid according to the dictionaries I looked\nat, eg [1]. I don't feel a need to change anything.\n\n\t\t\tregards, tom lane\n\n[1] https://www.merriam-webster.com/dictionary/implement\n\n\n",
"msg_date": "Sun, 23 Oct 2022 09:04:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Replace \"implementor\" with \"implementer\""
},
{
"msg_contents": "Hi Tom,\n\n> They're both valid according to the dictionaries I looked\n> at, eg [1]. I don't feel a need to change anything.\n\nOK, thanks. And sorry for the noise.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 23 Oct 2022 16:07:16 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Replace \"implementor\" with \"implementer\""
}
] |
[
{
"msg_contents": "Hello hackers,\n\nThe `json_populate_recordset` and `json_agg` functions allow systems to\nprocess/generate json directly on the database. This \"cut outs the middle\ntier\"[1] and notably reduces the complexity of web applications.\n\nCSV processing is also a common use case and PostgreSQL has the COPY ..\nFROM .. CSV form but COPY is not compatible with libpq pipeline mode and\nthe interface is clunkier to use.\n\nI propose to include two new functions:\n\n- csv_populate_recordset ( base anyelement, from_csv text )\n- csv_agg ( anyelement )\n\nI would gladly implement these if it sounds like a good idea.\n\nI see there's already some code that deals with CSV on\n\n- src/backend/commands/copyfromparse.c(CopyReadAttributesCSV)\n- src/fe_utils/print.c(csv_print_field)\n- src/backend/utils/error/csvlog(write_csvlog)\n\nSo perhaps a new csv module could benefit the codebase as well.\n\nBest regards,\nSteve\n\n[1]: https://www.crunchydata.com/blog/generating-json-directly-from-postgres\n\nHello hackers,The `json_populate_recordset` and `json_agg` functions allow systems to process/generate json directly on the database. This \"cut outs the middle tier\"[1] and notably reduces the complexity of web applications. CSV processing is also a common use case and PostgreSQL has the COPY .. FROM .. CSV form but COPY is not compatible with libpq pipeline mode and the interface is clunkier to use.I propose to include two new functions:- csv_populate_recordset ( base anyelement, from_csv text )- csv_agg ( anyelement )I would gladly implement these if it sounds like a good idea.I see there's already some code that deals with CSV on- src/backend/commands/copyfromparse.c(CopyReadAttributesCSV)- src/fe_utils/print.c(csv_print_field) - src/backend/utils/error/csvlog(write_csvlog)So perhaps a new csv module could benefit the codebase as well. Best regards,Steve[1]: https://www.crunchydata.com/blog/generating-json-directly-from-postgres",
"msg_date": "Sun, 23 Oct 2022 20:50:11 -0500",
"msg_from": "Steve Chavez <steve@supabase.io>",
"msg_from_op": true,
"msg_subject": "csv_populate_recordset and csv_agg"
},
{
"msg_contents": "Steve Chavez <steve@supabase.io> writes:\n> CSV processing is also a common use case and PostgreSQL has the COPY ..\n> FROM .. CSV form but COPY is not compatible with libpq pipeline mode and\n> the interface is clunkier to use.\n\n> I propose to include two new functions:\n\n> - csv_populate_recordset ( base anyelement, from_csv text )\n> - csv_agg ( anyelement )\n\nThe trouble with CSV is there are so many mildly-incompatible\nversions of it. I'm okay with supporting it in COPY, where\nwe have the freedom to add random sub-options (QUOTE, ESCAPE,\nFORCE_QUOTE, yadda yadda) to cope with those variants.\nI don't see a nice way to handle that issue in the functions\nyou propose --- you'd have to assume that there is One True CSV,\nwhich sadly ain't so, or else complicate the functions beyond\nusability.\n\nAlso, in the end CSV is a surface presentation layer, and as\nsuch it's not terribly well suited as the calculation representation\nfor aggregates and other functions. I think these proposed functions\nwould have pretty terrible performance as a consequence of the\nneed to constantly re-parse the surface format. The same point\ncould be made about JSON ... which is why we prefer to implement\nprocessing functions with JSONB.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 Oct 2022 22:51:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: csv_populate_recordset and csv_agg"
}
] |
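For readers unfamiliar with the JSON functions the proposal mirrors, a minimal SQL sketch (the composite type name `twocol` is invented for illustration; the `csv_*` calls show the *proposed* interface from the thread and do not exist in PostgreSQL):

```sql
-- The existing JSON counterparts the proposal is modeled on:
CREATE TYPE twocol AS (a int, b text);

-- json_populate_recordset: json text -> set of records
SELECT * FROM json_populate_recordset(
    NULL::twocol,
    '[{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]'
);

-- json_agg: rows -> a single json value
SELECT json_agg(t) FROM (VALUES (1, 'x'), (2, 'y')) AS t(a, b);

-- Proposed (NOT implemented): the CSV analogues might look like
-- SELECT * FROM csv_populate_recordset(NULL::twocol, E'a,b\n1,x\n2,y');
-- SELECT csv_agg(t) FROM (VALUES (1, 'x'), (2, 'y')) AS t(a, b);
```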
[
{
"msg_contents": "\nHi, hackers\n\nThe TransactionStateData has savepointLevel field, however, I do not understand\nwhat is savepoint level, it seems all savepoints have the same savepointLevel,\nI want to know how the savepoint level changes.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 24 Oct 2022 12:19:25 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Question about savepoint level?"
},
{
"msg_contents": "On Mon, 24 Oct 2022 at 12:19, Japin Li <japinli@hotmail.com> wrote:\n> Hi, hackers\n>\n> The TransactionStateData has savepointLevel field, however, I do not understand\n> what is savepoint level, it seems all savepoints have the same savepointLevel,\n> I want to know how the savepoint level changes.\n\nI try to remove the savepointLevel, and it seems harmless. Any thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Mon, 24 Oct 2022 14:59:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about savepoint level?"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 3:00 PM Japin Li <japinli@hotmail.com> wrote:\n\n> On Mon, 24 Oct 2022 at 12:19, Japin Li <japinli@hotmail.com> wrote:\n> > The TransactionStateData has savepointLevel field, however, I do not\n> understand\n> > what is savepoint level, it seems all savepoints have the same\n> savepointLevel,\n> > I want to know how the savepoint level changes.\n>\n> I try to remove the savepointLevel, and it seems harmless. Any thoughts?\n\n\nISTM the savepointLevel always remains the same as what is in\nTopTransactionStateData after looking at the codes. Now I also get\nconfused. Maybe what we want is nestingLevel?\n\nThanks\nRichard\n\nOn Mon, Oct 24, 2022 at 3:00 PM Japin Li <japinli@hotmail.com> wrote:\nOn Mon, 24 Oct 2022 at 12:19, Japin Li <japinli@hotmail.com> wrote:\n> The TransactionStateData has savepointLevel field, however, I do not understand\n> what is savepoint level, it seems all savepoints have the same savepointLevel,\n> I want to know how the savepoint level changes.\n\nI try to remove the savepointLevel, and it seems harmless. Any thoughts? ISTM the savepointLevel always remains the same as what is inTopTransactionStateData after looking at the codes. Now I also getconfused. Maybe what we want is nestingLevel?ThanksRichard",
"msg_date": "Mon, 24 Oct 2022 17:32:35 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about savepoint level?"
},
{
"msg_contents": "On 2022-Oct-24, Richard Guo wrote:\n\n> On Mon, Oct 24, 2022 at 3:00 PM Japin Li <japinli@hotmail.com> wrote:\n> \n> > I try to remove the savepointLevel, and it seems harmless. Any thoughts?\n> \n> ISTM the savepointLevel always remains the same as what is in\n> TopTransactionStateData after looking at the codes. Now I also get\n> confused. Maybe what we want is nestingLevel?\n\nThis has already been discussed:\nhttps://postgr.es/m/1317297307-sup-7945@alvh.no-ip.org\nNow that we have transaction-controlling procedures, I think the next\nstep is to add the SQL-standard feature that allows savepoint level\ncontrol for them, which would make the savepointLevel no longer dead\ncode.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n",
"msg_date": "Mon, 24 Oct 2022 11:56:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Question about savepoint level?"
},
{
"msg_contents": "\nOn Mon, 24 Oct 2022 at 17:56, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> This has already been discussed:\n> https://postgr.es/m/1317297307-sup-7945@alvh.no-ip.org\n\nSorry for my lazy search.\n\n> Now that we have transaction-controlling procedures, I think the next\n> step is to add the SQL-standard feature that allows savepoint level\n> control for them, which would make the savepointLevel no longer dead\n> code.\n\nSo the savepoint level is used for CREATE PROCEDURE ... OLD/NEW SAVEPOINT LEVEL\nsyntax [1], right?\n\n[1] https://www.ibm.com/docs/en/db2/10.1.0?topic=statements-create-procedure-sql\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 24 Oct 2022 18:33:40 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about savepoint level?"
},
{
"msg_contents": "On 2022-Oct-24, Japin Li wrote:\n\n> On Mon, 24 Oct 2022 at 17:56, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Now that we have transaction-controlling procedures, I think the next\n> > step is to add the SQL-standard feature that allows savepoint level\n> > control for them, which would make the savepointLevel no longer dead\n> > code.\n> \n> So the savepoint level is used for CREATE PROCEDURE ... OLD/NEW SAVEPOINT LEVEL\n> syntax [1], right?\n> \n> [1] https://www.ibm.com/docs/en/db2/10.1.0?topic=statements-create-procedure-sql\n\nYeah, that's what I understand. The default behavior is the current\nbehavior (OLD SAVEPOINT LEVEL). In a procedure that specifies NEW\nSAVEPOINT LEVEL trying to rollback a savepoint that was defined before\nthe procedure was called is an error, which sounds a useful protection.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)\n\n\n",
"msg_date": "Mon, 24 Oct 2022 13:03:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Question about savepoint level?"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 6:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Oct-24, Richard Guo wrote:\n> > ISTM the savepointLevel always remains the same as what is in\n> > TopTransactionStateData after looking at the codes. Now I also get\n> > confused. Maybe what we want is nestingLevel?\n>\n> This has already been discussed:\n> https://postgr.es/m/1317297307-sup-7945@alvh.no-ip.org\n> Now that we have transaction-controlling procedures, I think the next\n> step is to add the SQL-standard feature that allows savepoint level\n> control for them, which would make the savepointLevel no longer dead\n> code.\n\n\nNow I see the context. Thanks for pointing that out.\n\nThanks\nRichard\n\nOn Mon, Oct 24, 2022 at 6:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2022-Oct-24, Richard Guo wrote:\n> ISTM the savepointLevel always remains the same as what is in\n> TopTransactionStateData after looking at the codes. Now I also get\n> confused. Maybe what we want is nestingLevel?\n\nThis has already been discussed:\nhttps://postgr.es/m/1317297307-sup-7945@alvh.no-ip.org\nNow that we have transaction-controlling procedures, I think the next\nstep is to add the SQL-standard feature that allows savepoint level\ncontrol for them, which would make the savepointLevel no longer dead\ncode. Now I see the context. Thanks for pointing that out.ThanksRichard",
"msg_date": "Tue, 25 Oct 2022 10:13:44 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about savepoint level?"
}
] |
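To make the discussion concrete, a hedged SQL sketch (the table `t` is assumed to exist; the OLD/NEW SAVEPOINT LEVEL clause is the DB2/SQL-standard syntax referenced via [1] in the thread and is not accepted by PostgreSQL):

```sql
-- Today in PostgreSQL, all savepoints live at one and the same level:
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT s1;
INSERT INTO t VALUES (2);
ROLLBACK TO SAVEPOINT s1;   -- undoes only the second insert
COMMIT;

-- The SQL-standard feature discussed above (DB2 syntax, NOT PostgreSQL):
-- CREATE PROCEDURE p() ... NEW SAVEPOINT LEVEL ...
-- Inside such a procedure, ROLLBACK TO a savepoint defined by the caller
-- would raise an error instead of silently crossing the procedure boundary.
```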
[
{
"msg_contents": "Hi hackers.\n\nThere is a docs Logical Replication section \"31.10 Configuration\nSettings\" [1] which describes some logical replication GUCs, and\ndetails on how they interact with each other and how to take that into\naccount when setting their values.\n\nThere is another docs Server Configuration section \"20.6 Replication\"\n[2] which lists the replication-related GUC parameters, and what they\nare for.\n\nCurrently AFAIK those two pages are unconnected, but I felt it might\nbe helpful if some of the parameters in the list [2] had xref links to\nthe additional logical replication configuration information [1]. PSA\na patch to do that.\n\n~~\n\nMeanwhile, I also suspect that the main blurb top of [1] is not\nentirely correct... it says \"These settings control the behaviour of\nthe built-in streaming replication feature\", although some of the GUCs\nmentioned later in this section are clearly for \"logical replication\".\n\nThoughts?\n\n------\n[1] 31.10 Configuration Settings -\nhttps://www.postgresql.org/docs/current/logical-replication-config.html\n[2] 20.6 Replication -\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 24 Oct 2022 18:44:54 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Mon, 24 Oct 2022 at 13:15, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi hackers.\n>\n> There is a docs Logical Replication section \"31.10 Configuration\n> Settings\" [1] which describes some logical replication GUCs, and\n> details on how they interact with each other and how to take that into\n> account when setting their values.\n>\n> There is another docs Server Configuration section \"20.6 Replication\"\n> [2] which lists the replication-related GUC parameters, and what they\n> are for.\n>\n> Currently AFAIK those two pages are unconnected, but I felt it might\n> be helpful if some of the parameters in the list [2] had xref links to\n> the additional logical replication configuration information [1]. PSA\n> a patch to do that.\n>\n> ~~\n>\n> Meanwhile, I also suspect that the main blurb top of [1] is not\n> entirely correct... it says \"These settings control the behaviour of\n> the built-in streaming replication feature\", although some of the GUCs\n> mentioned later in this section are clearly for \"logical replication\".\n\nThe introduction mainly talks about streaming replication and the page\n[1] subsection \"Subscribers\" clearly mentions that these\nconfigurations are for logical replication. As we already have a\nseparate page [2] to detail about logical replication configurations,\nit might be better to move the \"subscribers\" section from [1] to [2].\n\n[1] 20.6 Replication -\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html\n[2] 31.10 Configuration Settings -\nhttps://www.postgresql.org/docs/current/logical-replication-config.html\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 13 Nov 2022 06:17:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Sun, Nov 13, 2022 at 11:47 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 24 Oct 2022 at 13:15, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi hackers.\n> >\n> > There is a docs Logical Replication section \"31.10 Configuration\n> > Settings\" [1] which describes some logical replication GUCs, and\n> > details on how they interact with each other and how to take that into\n> > account when setting their values.\n> >\n> > There is another docs Server Configuration section \"20.6 Replication\"\n> > [2] which lists the replication-related GUC parameters, and what they\n> > are for.\n> >\n> > Currently AFAIK those two pages are unconnected, but I felt it might\n> > be helpful if some of the parameters in the list [2] had xref links to\n> > the additional logical replication configuration information [1]. PSA\n> > a patch to do that.\n> >\n> > ~~\n> >\n> > Meanwhile, I also suspect that the main blurb top of [1] is not\n> > entirely correct... it says \"These settings control the behaviour of\n> > the built-in streaming replication feature\", although some of the GUCs\n> > mentioned later in this section are clearly for \"logical replication\".\n>\n> The introduction mainly talks about streaming replication and the page\n> [1] subsection \"Subscribers\" clearly mentions that these\n> configurations are for logical replication. As we already have a\n> separate page [2] to detail about logical replication configurations,\n> it might be better to move the \"subscribers\" section from [1] to [2].\n>\n> [1] 20.6 Replication -\n> https://www.postgresql.org/docs/current/runtime-config-replication.html\n> [2] 31.10 Configuration Settings -\n> https://www.postgresql.org/docs/current/logical-replication-config.html\n>\n\nThanks, Vignesh. 
Your suggestion (to move that \"Subscribers\" section)\nseemed like a good idea to me, so PSA my patch v2 to implement that.\n\nNow, on the Streaming Replication page\n- the blurb has a reference to information about logical replication config\n- the \"Subscribers\" section was relocated to the other page\n\nNow, on the Logical Replication \"Configuration Settings\" page\n- there are new subsections for \"Publishers\", \"Subscribers\" (copied), \"Notes\"\n- some wording is rearranged but the content is basically the same as before\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 15 Nov 2022 16:47:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Tue, 15 Nov 2022 at 11:17, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, Nov 13, 2022 at 11:47 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, 24 Oct 2022 at 13:15, Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi hackers.\n> > >\n> > > There is a docs Logical Replication section \"31.10 Configuration\n> > > Settings\" [1] which describes some logical replication GUCs, and\n> > > details on how they interact with each other and how to take that into\n> > > account when setting their values.\n> > >\n> > > There is another docs Server Configuration section \"20.6 Replication\"\n> > > [2] which lists the replication-related GUC parameters, and what they\n> > > are for.\n> > >\n> > > Currently AFAIK those two pages are unconnected, but I felt it might\n> > > be helpful if some of the parameters in the list [2] had xref links to\n> > > the additional logical replication configuration information [1]. PSA\n> > > a patch to do that.\n> > >\n> > > ~~\n> > >\n> > > Meanwhile, I also suspect that the main blurb top of [1] is not\n> > > entirely correct... it says \"These settings control the behaviour of\n> > > the built-in streaming replication feature\", although some of the GUCs\n> > > mentioned later in this section are clearly for \"logical replication\".\n> >\n> > The introduction mainly talks about streaming replication and the page\n> > [1] subsection \"Subscribers\" clearly mentions that these\n> > configurations are for logical replication. As we already have a\n> > separate page [2] to detail about logical replication configurations,\n> > it might be better to move the \"subscribers\" section from [1] to [2].\n> >\n> > [1] 20.6 Replication -\n> > https://www.postgresql.org/docs/current/runtime-config-replication.html\n> > [2] 31.10 Configuration Settings -\n> > https://www.postgresql.org/docs/current/logical-replication-config.html\n> >\n>\n> Thanks, Vignesh. 
Your suggestion (to move that \"Subscribers\" section)\n> seemed like a good idea to me, so PSA my patch v2 to implement that.\n>\n> Now, on the Streaming Replication page\n> - the blurb has a reference to information about logical replication config\n> - the \"Subscribers\" section was relocated to the other page\n>\n> Now, on the Logical Replication \"Configuration Settings\" page\n> - there are new subsections for \"Publishers\", \"Subscribers\" (copied), \"Notes\"\n> - some wording is rearranged but the content is basically the same as before\n\nOne suggestion:\nThe format of subscribers includes the data type and default values,\nthe format of publishers does not include data type and default\nvalues. We can try to maintain the consistency for both publisher and\nsubscriber configurations.\n+ <para>\n+ <varname>wal_level</varname> must be set to <literal>logical</literal>.\n+ </para>\n\n+ <term><varname>max_logical_replication_workers</varname>\n(<type>integer</type>)\n+ <indexterm>\n+ <primary><varname>max_logical_replication_workers</varname>\nconfiguration parameter</primary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Specifies maximum number of logical replication workers. This\nmust be set\n+ to at least the number of subscriptions (for apply workers), plus some\n+ reserve for the table synchronization workers.\n+ </para>\n+ <para>\n\nIf we don't want to keep the same format, we could give a link to\nruntime-config-replication where data type and default is defined for\npublisher configurations max_replication_slots and max_wal_senders.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 16 Nov 2022 16:54:34 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 10:24 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n...\n\n> One suggestion:\n> The format of subscribers includes the data type and default values,\n> the format of publishers does not include data type and default\n> values. We can try to maintain the consistency for both publisher and\n> subscriber configurations.\n> + <para>\n> + <varname>wal_level</varname> must be set to <literal>logical</literal>.\n> + </para>\n>\n> + <term><varname>max_logical_replication_workers</varname>\n> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>max_logical_replication_workers</varname>\n> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies maximum number of logical replication workers. This\n> must be set\n> + to at least the number of subscriptions (for apply workers), plus some\n> + reserve for the table synchronization workers.\n> + </para>\n> + <para>\n>\n> If we don't want to keep the same format, we could give a link to\n> runtime-config-replication where data type and default is defined for\n> publisher configurations max_replication_slots and max_wal_senders.\n>\n\nThanks for your suggestions.\n\nI have included xref links to the original definitions, rather than\ndefining the same GUC in multiple places.\n\nPSA v3.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 23 Nov 2022 09:16:20 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 9:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 10:24 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> ...\n>\n> > One suggestion:\n> > The format of subscribers includes the data type and default values,\n> > the format of publishers does not include data type and default\n> > values. We can try to maintain the consistency for both publisher and\n> > subscriber configurations.\n> > + <para>\n> > + <varname>wal_level</varname> must be set to <literal>logical</literal>.\n> > + </para>\n> >\n> > + <term><varname>max_logical_replication_workers</varname>\n> > (<type>integer</type>)\n> > + <indexterm>\n> > + <primary><varname>max_logical_replication_workers</varname>\n> > configuration parameter</primary>\n> > + </indexterm>\n> > + </term>\n> > + <listitem>\n> > + <para>\n> > + Specifies maximum number of logical replication workers. This\n> > must be set\n> > + to at least the number of subscriptions (for apply workers), plus some\n> > + reserve for the table synchronization workers.\n> > + </para>\n> > + <para>\n> >\n> > If we don't want to keep the same format, we could give a link to\n> > runtime-config-replication where data type and default is defined for\n> > publisher configurations max_replication_slots and max_wal_senders.\n> >\n>\n> Thanks for your suggestions.\n>\n> I have included xref links to the original definitions, rather than\n> defining the same GUC in multiple places.\n>\n> PSA v3.\n>\n\nI updated the patch. The content is unchanged from v3 but the links\nare modified so now they render with the correct <varname> format for\nthe GUC names.\n\nPSA v4.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 24 Nov 2022 10:43:39 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "Your patch moves the description of the subscriber-related configuration \nparameters from config.sgml to logical-replication.sgml. But \nconfig.sgml is supposed to contain *all* configuration parameters. If \nwe're going to start splitting this up and moving things around then \nwe'd need a more comprehensive plan than this individual patch. (I'm \nnot suggesting that we actually do this.)\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 11:22:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 9:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Your patch moves the description of the subscriber-related configuration\n> parameters from config.sgml to logical-replication.sgml. But\n> config.sgml is supposed to contain *all* configuration parameters. If\n> we're going to start splitting this up and moving things around then\n> we'd need a more comprehensive plan than this individual patch. (I'm\n> not suggesting that we actually do this.)\n>\n\nOK, thanks for the information.\n\nThis v5 patch now only adds some previously missing cross-references\nand tidies the Chapter 31.10 \"Configuration Settings\" section.\nMeanwhile, the Subscriber GUC descriptions are left on the\nconfig.sgml, where you said they are supposed to be.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 29 Nov 2022 13:39:21 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
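For context, the GUCs being cross-referenced in this thread can be sketched as a postgresql.conf fragment (the values are illustrative only, not recommendations from the patch):

```
# --- publisher side ---
wal_level = logical
max_replication_slots = 10            # >= number of subscriptions, plus some reserve
max_wal_senders = 10                  # >= max_replication_slots, plus any physical replicas

# --- subscriber side ---
max_replication_slots = 10            # >= number of subscriptions, plus table-sync reserve
max_logical_replication_workers = 4   # apply workers + table synchronization workers
max_worker_processes = 8              # at least max_logical_replication_workers + 1
```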
{
"msg_contents": "Hi,\n\nOn Mon, Oct 24, 2022 at 12:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> Hi hackers.\n>\n> There is a docs Logical Replication section \"31.10 Configuration\n> Settings\" [1] which describes some logical replication GUCs, and\n> details on how they interact with each other and how to take that into\n> account when setting their values.\n>\n> There is another docs Server Configuration section \"20.6 Replication\"\n> [2] which lists the replication-related GUC parameters, and what they\n> are for.\n>\n> Currently AFAIK those two pages are unconnected, but I felt it might\n> be helpful if some of the parameters in the list [2] had xref links to\n> the additional logical replication configuration information [1]. PSA\n> a patch to do that.\n>\n\n+1 on the patch. Some feedback on v5 below.\n\n> + <para>\n> + For <firstterm>logical replication</firstterm> configuration\nsettings refer\n> + also to <xref linkend=\"logical-replication-config\"/>.\n> + </para>\n> +\n\nI feel the top paragraph needs to explain terminology for logical\nreplication like it does for physical replication in addition to linking to\nthe logical replication config page. I'm recommending this as we use terms\nlike subscriber etc. in description of parameters without introducing them\nfirst.\n\nAs an example, something like below might work.\n\nThese settings control the behavior of the built-in streaming replication\nfeature (see Section 27.2.5) and logical replication (link).\n\nFor physical replication, servers will be either a primary or a standby\nserver. Primaries can send data, while standbys are always receivers of\nreplicated data. When cascading replication (see Section 27.2.7) is used,\nstandby servers can also be senders, as well as receivers. Parameters are\nmainly for sending and standby servers, though some parameters have meaning\nonly on the primary server. 
Settings may vary across the cluster without\nproblems if that is required.\n\nFor logical replication, servers will either be publishers (also called\nsenders in the sections below) or subscribers. Publishers are ....,\nSubscribers are...\n\n> + <para>\n> + See <xref linkend=\"logical-replication-config\"/> for more\ndetails\n> + about setting <varname>max_replication_slots</varname> for\nlogical\n> + replication.\n> + </para>\n\n\nThe link doesn't add any new information regarding max_replication_slots\nother than \"to reserve some for table sync\" and has a good amount of\nunrelated info. I think it might be useful to just put a line here asking\nto reserve some for table sync instead of linking to the entire logical\nreplication config section.\n\n> - Logical replication requires several configuration options to be set.\n> + Logical replication requires several configuration parameters to be\nset.\n\nMay not be needed? The docs have references to both options and parameters\nbut I don't feel strongly about it. Feel free to use what you prefer.\n\nI think we should add an additional line to the intro here saying that\nparameters are mostly relevant only one of the subscriber or publisher.\nMaybe a better written version of \"While max_replication_slots means\ndifferent things on the publisher and subscriber, all other parameters are\nrelevant only on either the publisher or the subscriber.\"\n\n> + <sect2 id=\"logical-replication-config-notes\">\n> + <title>Notes</title>\n\nI don't think we need this sub-section. If I understand correctly, these\nparameters are effective only on the subscriber side. 
So, any reason to not\ninclude them in that section?\n\n> +\n> + <para>\n> + Logical replication workers are also affected by\n> + <link\nlinkend=\"guc-wal-receiver-timeout\"><varname>wal_receiver_timeout</varname></link>,\n> + <link\nlinkend=\"guc-wal-receiver-status-interval\"><varname>wal_receiver_status_interval</varname></link>\nand\n> + <link\nlinkend=\"guc-wal-retrieve-retry-interval\"><varname>wal_receiver_retry_interval</varname></link>.\n> + </para>\n> +\n\nI like moving this; it makes more sense here. Should we remove it from\nconfig.sgml? It seems a bit out of place there as we generally talk only\nabout individual parameters there and this line is general logical\nreplication subscriber advise which is more suited to\nlogical-replication.sgml\n\n> + <para>\n> + Configuration parameter\n> + <link\nlinkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> + may need to be adjusted to accommodate for replication workers, at\nleast (\n> + <link\nlinkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> + + <literal>1</literal>). Some extensions and parallel queries also\ntake\n> + worker slots from <varname>max_worker_processes</varname>.\n> + </para>\n> +\n> + </sect2>\n\nI think we should move this to the subscriber section as said above. It's\nuseful to know this and people might skip over the notes.\n\n\n> ~~\n>\n> Meanwhile, I also suspect that the main blurb top of [1] is not\n> entirely correct... 
it says \"These settings control the behaviour of\n> the built-in streaming replication feature\", although some of the GUCs\n> mentioned later in this section are clearly for \"logical replication\".\n>\n\n> Thoughts?\n>\n\nI shared an idea above.\n\nRegards,\nSamay\n\n\n>\n> ------\n> [1] 31.10 Configuration Settings -\n> https://www.postgresql.org/docs/current/logical-replication-config.html\n> [2] 20.6 Replication -\n> https://www.postgresql.org/docs/current/runtime-config-replication.html\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>\n\nHi,On Mon, Oct 24, 2022 at 12:45 AM Peter Smith <smithpb2250@gmail.com> wrote:Hi hackers.\n\nThere is a docs Logical Replication section \"31.10 Configuration\nSettings\" [1] which describes some logical replication GUCs, and\ndetails on how they interact with each other and how to take that into\naccount when setting their values.\n\nThere is another docs Server Configuration section \"20.6 Replication\"\n[2] which lists the replication-related GUC parameters, and what they\nare for.\n\nCurrently AFAIK those two pages are unconnected, but I felt it might\nbe helpful if some of the parameters in the list [2] had xref links to\nthe additional logical replication configuration information [1]. PSA\na patch to do that.+1 on the patch. Some feedback on v5 below.> + <para>> + For <firstterm>logical replication</firstterm> configuration settings refer> + also to <xref linkend=\"logical-replication-config\"/>.> + </para>> +I feel the top paragraph needs to explain terminology for logical replication like it does for physical replication in addition to linking to the logical replication config page. I'm recommending this as we use terms like subscriber etc. 
in description of parameters without introducing them first.As an example, something like below might work.These settings control the behavior of the built-in streaming replication feature (see Section 27.2.5) and logical replication (link).For physical replication, servers will be either a primary or a standby server. Primaries can send data, while standbys are always receivers of replicated data. When cascading replication (see Section 27.2.7) is used, standby servers can also be senders, as well as receivers. Parameters are mainly for sending and standby servers, though some parameters have meaning only on the primary server. Settings may vary across the cluster without problems if that is required.For logical replication, servers will either be publishers (also called senders in the sections below) or subscribers. Publishers are ...., Subscribers are...> + <para>> + See <xref linkend=\"logical-replication-config\"/> for more details> + about setting <varname>max_replication_slots</varname> for logical> + replication.> + </para>The link doesn't add any new information regarding max_replication_slots other than \"to reserve some for table sync\" and has a good amount of unrelated info. I think it might be useful to just put a line here asking to reserve some for table sync instead of linking to the entire logical replication config section. > - Logical replication requires several configuration options to be set.> + Logical replication requires several configuration parameters to be set.May not be needed? The docs have references to both options and parameters but I don't feel strongly about it. Feel free to use what you prefer.I think we should add an additional line to the intro here saying that parameters are mostly relevant only one of the subscriber or publisher. 
Maybe a better written version of \"While max_replication_slots means different things on the publisher and subscriber, all other parameters are relevant only on either the publisher or the subscriber.\"> + <sect2 id=\"logical-replication-config-notes\">> + <title>Notes</title>I don't think we need this sub-section. If I understand correctly, these parameters are effective only on the subscriber side. So, any reason to not include them in that section?> +> + <para>> + Logical replication workers are also affected by> + <link linkend=\"guc-wal-receiver-timeout\"><varname>wal_receiver_timeout</varname></link>,> + <link linkend=\"guc-wal-receiver-status-interval\"><varname>wal_receiver_status_interval</varname></link> and> + <link linkend=\"guc-wal-retrieve-retry-interval\"><varname>wal_receiver_retry_interval</varname></link>.> + </para>> +I like moving this; it makes more sense here. Should we remove it from config.sgml? It seems a bit out of place there as we generally talk only about individual parameters there and this line is general logical replication subscriber advise which is more suited to logical-replication.sgml> + <para>> + Configuration parameter> + <link linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>> + may need to be adjusted to accommodate for replication workers, at least (> + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>> + + <literal>1</literal>). Some extensions and parallel queries also take> + worker slots from <varname>max_worker_processes</varname>.> + </para>> +> + </sect2>I think we should move this to the subscriber section as said above. It's useful to know this and people might skip over the notes.\n\n~~\n\nMeanwhile, I also suspect that the main blurb top of [1] is not\nentirely correct... 
it says \"These settings control the behaviour of\nthe built-in streaming replication feature\", although some of the GUCs\nmentioned later in this section are clearly for \"logical replication\".\nThoughts?I shared an idea above.Regards,Samay \n\n------\n[1] 31.10 Configuration Settings -\nhttps://www.postgresql.org/docs/current/logical-replication-config.html\n[2] 20.6 Replication -\nhttps://www.postgresql.org/docs/current/runtime-config-replication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 5 Dec 2022 10:56:50 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 5:57 AM samay sharma <smilingsamay@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Oct 24, 2022 at 12:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> Hi hackers.\n>>\n>> There is a docs Logical Replication section \"31.10 Configuration\n>> Settings\" [1] which describes some logical replication GUCs, and\n>> details on how they interact with each other and how to take that into\n>> account when setting their values.\n>>\n>> There is another docs Server Configuration section \"20.6 Replication\"\n>> [2] which lists the replication-related GUC parameters, and what they\n>> are for.\n>>\n>> Currently AFAIK those two pages are unconnected, but I felt it might\n>> be helpful if some of the parameters in the list [2] had xref links to\n>> the additional logical replication configuration information [1]. PSA\n>> a patch to do that.\n>\n>\n> +1 on the patch. Some feedback on v5 below.\n>\n\nThanks for your detailed review comments!\n\nI have changed most things according to your suggestions. Please check patch v6.\n\n> > + <para>\n> > + For <firstterm>logical replication</firstterm> configuration settings refer\n> > + also to <xref linkend=\"logical-replication-config\"/>.\n> > + </para>\n> > +\n>\n> I feel the top paragraph needs to explain terminology for logical replication like it does for physical replication in addition to linking to the logical replication config page. I'm recommending this as we use terms like subscriber etc. in description of parameters without introducing them first.\n>\n> As an example, something like below might work.\n>\n> These settings control the behavior of the built-in streaming replication feature (see Section 27.2.5) and logical replication (link).\n>\n> For physical replication, servers will be either a primary or a standby server. Primaries can send data, while standbys are always receivers of replicated data. 
When cascading replication (see Section 27.2.7) is used, standby servers can also be senders, as well as receivers. Parameters are mainly for sending and standby servers, though some parameters have meaning only on the primary server. Settings may vary across the cluster without problems if that is required.\n>\n> For logical replication, servers will either be publishers (also called senders in the sections below) or subscribers. Publishers are ...., Subscribers are...\n>\n\nOK. I split this blurb into 2 parts – streaming and logical\nreplication. The streaming replication part is the same as before. The\nlogical replication part is new.\n\n> > + <para>\n> > + See <xref linkend=\"logical-replication-config\"/> for more details\n> > + about setting <varname>max_replication_slots</varname> for logical\n> > + replication.\n> > + </para>\n>\n>\n> The link doesn't add any new information regarding max_replication_slots other than \"to reserve some for table sync\" and has a good amount of unrelated info. I think it might be useful to just put a line here asking to reserve some for table sync instead of linking to the entire logical replication config section.\n>\n\nOK. I copied the tablesync note back to config.sgml definition of\n'max_replication_slots' and removed the link as suggested. Frankly, I\nalso thought it is a bit strange that the max_replication_slots in the\n“Sending Servers” section was describing this parameter for\n“Subscribers”. OTOH, I did not want to split the definition in half so\ninstead, I’ve added another Subscriber <varlistentry> that just refers\nback to this place. It looks like an improvement to me.\n\n> > - Logical replication requires several configuration options to be set.\n> > + Logical replication requires several configuration parameters to be set.\n>\n> May not be needed? The docs have references to both options and parameters but I don't feel strongly about it. Feel free to use what you prefer.\n\nOK. 
I removed this.\n\n>\n> I think we should add an additional line to the intro here saying that parameters are mostly relevant only one of the subscriber or publisher. Maybe a better written version of \"While max_replication_slots means different things on the publisher and subscriber, all other parameters are relevant only on either the publisher or the subscriber.\"\n>\n\nOK. Done but with slightly different wording to that.\n\n> > + <sect2 id=\"logical-replication-config-notes\">\n> > + <title>Notes</title>\n>\n> I don't think we need this sub-section. If I understand correctly, these parameters are effective only on the subscriber side. So, any reason to not include them in that section?\n\nOK. I moved these notes into the \"Subscribers\" section as suggested,\nand removed \"Notes\".\n\n>\n> > +\n> > + <para>\n> > + Logical replication workers are also affected by\n> > + <link linkend=\"guc-wal-receiver-timeout\"><varname>wal_receiver_timeout</varname></link>,\n> > + <link linkend=\"guc-wal-receiver-status-interval\"><varname>wal_receiver_status_interval</varname></link> and\n> > + <link linkend=\"guc-wal-retrieve-retry-interval\"><varname>wal_receiver_retry_interval</varname></link>.\n> > + </para>\n> > +\n>\n> I like moving this; it makes more sense here. Should we remove it from config.sgml? It seems a bit out of place there as we generally talk only about individual parameters there and this line is general logical replication subscriber advise which is more suited to logical-replication.sgml\n\nOK. 
I agree, it looked repetitive since the link to the\nlogical-replication page is nearby this information anyway, so I’ve\nremoved it from the config.sgml as you suggested.\n\n>\n> > + <para>\n> > + Configuration parameter\n> > + <link linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> > + may need to be adjusted to accommodate for replication workers, at least (\n> > + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > + + <literal>1</literal>). Some extensions and parallel queries also take\n> > + worker slots from <varname>max_worker_processes</varname>.\n> > + </para>\n> > +\n> > + </sect2>\n>\n> I think we should move this to the subscriber section as said above. It's useful to know this and people might skip over the notes.\n\nOK. Done.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 7 Dec 2022 18:12:36 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "Hi,\n\nOn Tue, Dec 6, 2022 at 11:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> On Tue, Dec 6, 2022 at 5:57 AM samay sharma <smilingsamay@gmail.com>\n> wrote:\n> >\n> > Hi,\n> >\n> > On Mon, Oct 24, 2022 at 12:45 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> >>\n> >> Hi hackers.\n> >>\n> >> There is a docs Logical Replication section \"31.10 Configuration\n> >> Settings\" [1] which describes some logical replication GUCs, and\n> >> details on how they interact with each other and how to take that into\n> >> account when setting their values.\n> >>\n> >> There is another docs Server Configuration section \"20.6 Replication\"\n> >> [2] which lists the replication-related GUC parameters, and what they\n> >> are for.\n> >>\n> >> Currently AFAIK those two pages are unconnected, but I felt it might\n> >> be helpful if some of the parameters in the list [2] had xref links to\n> >> the additional logical replication configuration information [1]. PSA\n> >> a patch to do that.\n> >\n> >\n> > +1 on the patch. Some feedback on v5 below.\n> >\n>\n> Thanks for your detailed review comments!\n>\n> I have changed most things according to your suggestions. Please check\n> patch v6.\n>\n\nThanks for the changes. See a few points of feedback below.\n\n> + <para>\n> + For <emphasis>logical replication</emphasis>,\n<firstterm>publishers</firstterm>\n> + (servers that do <link\nlinkend=\"sql-createpublication\"><command>CREATE\nPUBLICATION</command></link>)\n> + replicate data to <firstterm>subscribers</firstterm>\n> + (servers that do <link\nlinkend=\"sql-createsubscription\"><command>CREATE\nSUBSCRIPTION</command></link>).\n> + Servers can also be publishers and subscribers at the same time.\nNote,\n> + the following sections refers to publishers as \"senders\". 
The\nparameter\n> + <literal>max_replication_slots</literal> has a different meaning\nfor the\n> + publisher and subscriber, but all other parameters are relevant\nonly to\n> + one side of the replication. For more details about logical\nreplication\n> + configuration settings refer to\n> + <xref linkend=\"logical-replication-config\"/>.\n> + </para>\n\nThe second last line seems a bit odd here. In my last round of feedback, I\nhad meant to add the line \"The parameter .... \" onwards to the top of\nlogical-replication-config.sgml.\n\nWhat if we made the top of logical-replication-config.sgml like below?\n\nLogical replication requires several configuration options to be set. Most\nconfiguration options are relevant only on one side of the replication\n(i.e. publisher or subscriber). However, max_replication_slots is\napplicable on both sides but has different meanings on each side.\n\n>\n> > > + <para>\n> > > + For <firstterm>logical replication</firstterm> configuration\n> settings refer\n> > > + also to <xref linkend=\"logical-replication-config\"/>.\n> > > + </para>\n> > > +\n> >\n> > I feel the top paragraph needs to explain terminology for logical\n> replication like it does for physical replication in addition to linking to\n> the logical replication config page. I'm recommending this as we use terms\n> like subscriber etc. in description of parameters without introducing them\n> first.\n> >\n> > As an example, something like below might work.\n> >\n> > These settings control the behavior of the built-in streaming\n> replication feature (see Section 27.2.5) and logical replication (link).\n> >\n> > For physical replication, servers will be either a primary or a standby\n> server. Primaries can send data, while standbys are always receivers of\n> replicated data. When cascading replication (see Section 27.2.7) is used,\n> standby servers can also be senders, as well as receivers. 
Parameters are\n> mainly for sending and standby servers, though some parameters have meaning\n> only on the primary server. Settings may vary across the cluster without\n> problems if that is required.\n> >\n> > For logical replication, servers will either be publishers (also called\n> senders in the sections below) or subscribers. Publishers are ....,\n> Subscribers are...\n> >\n>\n> OK. I split this blurb into 2 parts – streaming and logical\n> replication. The streaming replication part is the same as before. The\n> logical replication part is new.\n>\n> > > + <para>\n> > > + See <xref linkend=\"logical-replication-config\"/> for more\n> details\n> > > + about setting <varname>max_replication_slots</varname> for\n> logical\n> > > + replication.\n> > > + </para>\n> >\n> >\n> > The link doesn't add any new information regarding max_replication_slots\n> other than \"to reserve some for table sync\" and has a good amount of\n> unrelated info. I think it might be useful to just put a line here asking\n> to reserve some for table sync instead of linking to the entire logical\n> replication config section.\n> >\n>\n> OK. I copied the tablesync note back to config.sgml definition of\n> 'max_replication_slots' and removed the link as suggested. Frankly, I\n> also thought it is a bit strange that the max_replication_slots in the\n> “Sending Servers” section was describing this parameter for\n> “Subscribers”. OTOH, I did not want to split the definition in half so\n> instead, I’ve added another Subscriber <varlistentry> that just refers\n> back to this place. It looks like an improvement to me.\n>\n\nHmm, I agree this is a tricky scenario. However, to me, it seems odd to\nmention the parameter twice as this chapter of the docs just lists each\nparameter and describes them. So, I'd probably remove the reference to it\nin the subscriber section. 
We should describe it's usage in different\nplaces in the logical replication part of the docs (as we do).\n\n\n>\n> > > - Logical replication requires several configuration options to be\n> set.\n> > > + Logical replication requires several configuration parameters to\n> be set.\n> >\n> > May not be needed? The docs have references to both options and\n> parameters but I don't feel strongly about it. Feel free to use what you\n> prefer.\n>\n> OK. I removed this.\n>\n> >\n> > I think we should add an additional line to the intro here saying that\n> parameters are mostly relevant only one of the subscriber or publisher.\n> Maybe a better written version of \"While max_replication_slots means\n> different things on the publisher and subscriber, all other parameters are\n> relevant only on either the publisher or the subscriber.\"\n> >\n>\n> OK. Done but with slightly different wording to that.\n>\n> > > + <sect2 id=\"logical-replication-config-notes\">\n> > > + <title>Notes</title>\n> >\n> > I don't think we need this sub-section. If I understand correctly, these\n> parameters are effective only on the subscriber side. So, any reason to not\n> include them in that section?\n>\n> OK. I moved these notes into the \"Subscribers\" section as suggested,\n> and removed \"Notes\".\n>\n> >\n> > > +\n> > > + <para>\n> > > + Logical replication workers are also affected by\n> > > + <link\n> linkend=\"guc-wal-receiver-timeout\"><varname>wal_receiver_timeout</varname></link>,\n> > > + <link\n> linkend=\"guc-wal-receiver-status-interval\"><varname>wal_receiver_status_interval</varname></link>\n> and\n> > > + <link\n> linkend=\"guc-wal-retrieve-retry-interval\"><varname>wal_receiver_retry_interval</varname></link>.\n> > > + </para>\n> > > +\n> >\n> > I like moving this; it makes more sense here. Should we remove it from\n> config.sgml? 
It seems a bit out of place there as we generally talk only\n> about individual parameters there and this line is general logical\n> replication subscriber advise which is more suited to\n> logical-replication.sgml\n>\n> OK. I agree, it looked repetitive since the link to the\n> logical-replication page is nearby this information anyway, so I’ve\n> removed it from the config.sgml as you suggested.\n>\n> >\n> > > + <para>\n> > > + Configuration parameter\n> > > + <link\n> linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> > > + may need to be adjusted to accommodate for replication workers,\n> at least (\n> > > + <link\n> linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > > + + <literal>1</literal>). Some extensions and parallel queries\n> also take\n> > > + worker slots from <varname>max_worker_processes</varname>.\n> > > + </para>\n> > > +\n> > > + </sect2>\n> >\n> > I think we should move this to the subscriber section as said above.\n> It's useful to know this and people might skip over the notes.\n>\n> OK. Done.\n>\n\n> + <para>\n> + <link\nlinkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> + must be set to at least the number of subscriptions (for apply\nworkers), plus\n> + some reserve for the table synchronization workers. Configuration\nparameter\n> + <link\nlinkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> + may need to be adjusted to accommodate for replication workers, at\nleast (\n> + <link\nlinkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> + + <literal>1</literal>). 
Note, some extensions and parallel queries\nalso\n> + take worker slots from <varname>max_worker_processes</varname>.\n> + </para>\n\nMaybe do max_worker_processes in a new line like the rest.\n\nRegards,\nSamay\nMicrosoft\n\n>\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>\n\nHi,On Tue, Dec 6, 2022 at 11:12 PM Peter Smith <smithpb2250@gmail.com> wrote:On Tue, Dec 6, 2022 at 5:57 AM samay sharma <smilingsamay@gmail.com> wrote:\n>\n> Hi,\n>\n> On Mon, Oct 24, 2022 at 12:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> Hi hackers.\n>>\n>> There is a docs Logical Replication section \"31.10 Configuration\n>> Settings\" [1] which describes some logical replication GUCs, and\n>> details on how they interact with each other and how to take that into\n>> account when setting their values.\n>>\n>> There is another docs Server Configuration section \"20.6 Replication\"\n>> [2] which lists the replication-related GUC parameters, and what they\n>> are for.\n>>\n>> Currently AFAIK those two pages are unconnected, but I felt it might\n>> be helpful if some of the parameters in the list [2] had xref links to\n>> the additional logical replication configuration information [1]. PSA\n>> a patch to do that.\n>\n>\n> +1 on the patch. Some feedback on v5 below.\n>\n\nThanks for your detailed review comments!\n\nI have changed most things according to your suggestions. Please check patch v6.Thanks for the changes. See a few points of feedback below.> + <para>> + For <emphasis>logical replication</emphasis>, <firstterm>publishers</firstterm>> + (servers that do <link linkend=\"sql-createpublication\"><command>CREATE PUBLICATION</command></link>)> + replicate data to <firstterm>subscribers</firstterm>> + (servers that do <link linkend=\"sql-createsubscription\"><command>CREATE SUBSCRIPTION</command></link>).> + Servers can also be publishers and subscribers at the same time. Note,> + the following sections refers to publishers as \"senders\". 
The parameter> + <literal>max_replication_slots</literal> has a different meaning for the> + publisher and subscriber, but all other parameters are relevant only to> + one side of the replication. For more details about logical replication> + configuration settings refer to> + <xref linkend=\"logical-replication-config\"/>.> + </para>The second last line seems a bit odd here. In my last round of feedback, I had meant to add the line \"The parameter .... \" onwards to the top of logical-replication-config.sgml.What if we made the top of logical-replication-config.sgml like below?Logical replication requires several configuration options to be set. Most configuration options are relevant only on one side of the replication (i.e. publisher or subscriber). However, max_replication_slots is applicable on both sides but has different meanings on each side.\n\n> > + <para>\n> > + For <firstterm>logical replication</firstterm> configuration settings refer\n> > + also to <xref linkend=\"logical-replication-config\"/>.\n> > + </para>\n> > +\n>\n> I feel the top paragraph needs to explain terminology for logical replication like it does for physical replication in addition to linking to the logical replication config page. I'm recommending this as we use terms like subscriber etc. in description of parameters without introducing them first.\n>\n> As an example, something like below might work.\n>\n> These settings control the behavior of the built-in streaming replication feature (see Section 27.2.5) and logical replication (link).\n>\n> For physical replication, servers will be either a primary or a standby server. Primaries can send data, while standbys are always receivers of replicated data. When cascading replication (see Section 27.2.7) is used, standby servers can also be senders, as well as receivers. Parameters are mainly for sending and standby servers, though some parameters have meaning only on the primary server. 
Settings may vary across the cluster without problems if that is required.\n>\n> For logical replication, servers will either be publishers (also called senders in the sections below) or subscribers. Publishers are ...., Subscribers are...\n>\n\nOK. I split this blurb into 2 parts – streaming and logical\nreplication. The streaming replication part is the same as before. The\nlogical replication part is new.\n\n> > + <para>\n> > + See <xref linkend=\"logical-replication-config\"/> for more details\n> > + about setting <varname>max_replication_slots</varname> for logical\n> > + replication.\n> > + </para>\n>\n>\n> The link doesn't add any new information regarding max_replication_slots other than \"to reserve some for table sync\" and has a good amount of unrelated info. I think it might be useful to just put a line here asking to reserve some for table sync instead of linking to the entire logical replication config section.\n>\n\nOK. I copied the tablesync note back to config.sgml definition of\n'max_replication_slots' and removed the link as suggested. Frankly, I\nalso thought it is a bit strange that the max_replication_slots in the\n“Sending Servers” section was describing this parameter for\n“Subscribers”. OTOH, I did not want to split the definition in half so\ninstead, I’ve added another Subscriber <varlistentry> that just refers\nback to this place. It looks like an improvement to me.Hmm, I agree this is a tricky scenario. However, to me, it seems odd to mention the parameter twice as this chapter of the docs just lists each parameter and describes them. So, I'd probably remove the reference to it in the subscriber section. We should describe it's usage in different places in the logical replication part of the docs (as we do). \n\n> > - Logical replication requires several configuration options to be set.\n> > + Logical replication requires several configuration parameters to be set.\n>\n> May not be needed? 
The docs have references to both options and parameters but I don't feel strongly about it. Feel free to use what you prefer.\n\nOK. I removed this.\n\n>\n> I think we should add an additional line to the intro here saying that parameters are mostly relevant only one of the subscriber or publisher. Maybe a better written version of \"While max_replication_slots means different things on the publisher and subscriber, all other parameters are relevant only on either the publisher or the subscriber.\"\n>\n\nOK. Done but with slightly different wording to that.\n\n> > + <sect2 id=\"logical-replication-config-notes\">\n> > + <title>Notes</title>\n>\n> I don't think we need this sub-section. If I understand correctly, these parameters are effective only on the subscriber side. So, any reason to not include them in that section?\n\nOK. I moved these notes into the \"Subscribers\" section as suggested,\nand removed \"Notes\".\n\n>\n> > +\n> > + <para>\n> > + Logical replication workers are also affected by\n> > + <link linkend=\"guc-wal-receiver-timeout\"><varname>wal_receiver_timeout</varname></link>,\n> > + <link linkend=\"guc-wal-receiver-status-interval\"><varname>wal_receiver_status_interval</varname></link> and\n> > + <link linkend=\"guc-wal-retrieve-retry-interval\"><varname>wal_receiver_retry_interval</varname></link>.\n> > + </para>\n> > +\n>\n> I like moving this; it makes more sense here. Should we remove it from config.sgml? It seems a bit out of place there as we generally talk only about individual parameters there and this line is general logical replication subscriber advise which is more suited to logical-replication.sgml\n\nOK. 
I agree, it looked repetitive since the link to the\nlogical-replication page is nearby this information anyway, so I’ve\nremoved it from the config.sgml as you suggested.\n\n>\n> > + <para>\n> > + Configuration parameter\n> > + <link linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> > + may need to be adjusted to accommodate for replication workers, at least (\n> > + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > + + <literal>1</literal>). Some extensions and parallel queries also take\n> > + worker slots from <varname>max_worker_processes</varname>.\n> > + </para>\n> > +\n> > + </sect2>\n>\n> I think we should move this to the subscriber section as said above. It's useful to know this and people might skip over the notes.\n\nOK. Done.> + <para>> + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>> + must be set to at least the number of subscriptions (for apply workers), plus> + some reserve for the table synchronization workers. Configuration parameter> + <link linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>> + may need to be adjusted to accommodate for replication workers, at least (> + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>> + + <literal>1</literal>). Note, some extensions and parallel queries also> + take worker slots from <varname>max_worker_processes</varname>.> + </para>Maybe do max_worker_processes in a new line like the rest.Regards,SamayMicrosoft \n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 7 Dec 2022 15:49:19 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 10:49 AM samay sharma <smilingsamay@gmail.com> wrote:\n>\n...\n\n> Thanks for the changes. See a few points of feedback below.\n>\n\nPatch v7 addresses this feedback. PSA.\n\n> > + <para>\n> > + For <emphasis>logical replication</emphasis>, <firstterm>publishers</firstterm>\n> > + (servers that do <link linkend=\"sql-createpublication\"><command>CREATE PUBLICATION</command></link>)\n> > + replicate data to <firstterm>subscribers</firstterm>\n> > + (servers that do <link linkend=\"sql-createsubscription\"><command>CREATE SUBSCRIPTION</command></link>).\n> > + Servers can also be publishers and subscribers at the same time. Note,\n> > + the following sections refers to publishers as \"senders\". The parameter\n> > + <literal>max_replication_slots</literal> has a different meaning for the\n> > + publisher and subscriber, but all other parameters are relevant only to\n> > + one side of the replication. For more details about logical replication\n> > + configuration settings refer to\n> > + <xref linkend=\"logical-replication-config\"/>.\n> > + </para>\n>\n> The second last line seems a bit odd here. In my last round of feedback, I had meant to add the line \"The parameter .... \" onwards to the top of logical-replication-config.sgml.\n>\n> What if we made the top of logical-replication-config.sgml like below?\n>\n> Logical replication requires several configuration options to be set. Most configuration options are relevant only on one side of the replication (i.e. publisher or subscriber). However, max_replication_slots is applicable on both sides but has different meanings on each side.\n\n\nOK. Moving this note is not quite following the same pattern as the\n\"streaming replication\" intro blurb, but anyway it looks fine when\nmoved, so I've done as suggested.\n\n\n>> OK. I copied the tablesync note back to config.sgml definition of\n>> 'max_replication_slots' and removed the link as suggested. 
Frankly, I\n>> also thought it is a bit strange that the max_replication_slots in the\n>> “Sending Servers” section was describing this parameter for\n>> “Subscribers”. OTOH, I did not want to split the definition in half so\n>> instead, I’ve added another Subscriber <varlistentry> that just refers\n>> back to this place. It looks like an improvement to me.\n>\n>\n> Hmm, I agree this is a tricky scenario. However, to me, it seems odd to mention the parameter twice as this chapter of the docs just lists each parameter and describes them. So, I'd probably remove the reference to it in the subscriber section. We should describe it's usage in different places in the logical replication part of the docs (as we do).\n\nThe 'max_replication_slots' is problematic because it is almost like\nhaving 2 different GUCs that happen to have the same name. So I\npreferred it also gets a mention in the “Subscriber” section to make\nit obvious that it wears 2 hats, but IIUC you prefer that 2nd mention\nis not present because typically each GUC should appear once only in\nthis chapter. TBH, I think both ways could be successfully argued for\nor against -- so I’m just going to leave this as-is for now and let\nthe committer decide.\n\n> > + <para>\n> > + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > + must be set to at least the number of subscriptions (for apply workers), plus\n> > + some reserve for the table synchronization workers. Configuration parameter\n> > + <link linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> > + may need to be adjusted to accommodate for replication workers, at least (\n> > + <link linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > + + <literal>1</literal>). 
Note, some extensions and parallel queries also\n> > + take worker slots from <varname>max_worker_processes</varname>.\n> > + </para>\n>\n> Maybe do max_worker_processes in a new line like the rest.\n\nOK. Done as suggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 8 Dec 2022 16:20:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "Hi,\n\nOn Wed, Dec 7, 2022 at 9:20 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> On Thu, Dec 8, 2022 at 10:49 AM samay sharma <smilingsamay@gmail.com>\n> wrote:\n> >\n> ...\n>\n> > Thanks for the changes. See a few points of feedback below.\n> >\n>\n> Patch v7 addresses this feedback. PSA.\n>\n> > > + <para>\n> > > + For <emphasis>logical replication</emphasis>,\n> <firstterm>publishers</firstterm>\n> > > + (servers that do <link\n> linkend=\"sql-createpublication\"><command>CREATE\n> PUBLICATION</command></link>)\n> > > + replicate data to <firstterm>subscribers</firstterm>\n> > > + (servers that do <link\n> linkend=\"sql-createsubscription\"><command>CREATE\n> SUBSCRIPTION</command></link>).\n> > > + Servers can also be publishers and subscribers at the same time.\n> Note,\n> > > + the following sections refers to publishers as \"senders\". The\n> parameter\n> > > + <literal>max_replication_slots</literal> has a different meaning\n> for the\n> > > + publisher and subscriber, but all other parameters are relevant\n> only to\n> > > + one side of the replication. For more details about logical\n> replication\n> > > + configuration settings refer to\n> > > + <xref linkend=\"logical-replication-config\"/>.\n> > > + </para>\n> >\n> > The second last line seems a bit odd here. In my last round of feedback,\n> I had meant to add the line \"The parameter .... \" onwards to the top of\n> logical-replication-config.sgml.\n> >\n> > What if we made the top of logical-replication-config.sgml like below?\n> >\n> > Logical replication requires several configuration options to be set.\n> Most configuration options are relevant only on one side of the replication\n> (i.e. publisher or subscriber). However, max_replication_slots is\n> applicable on both sides but has different meanings on each side.\n>\n>\n> OK. 
Moving this note is not quite following the same pattern as the\n> \"streaming replication\" intro blurb, but anyway it looks fine when\n> moved, so I've done as suggested.\n>\n>\n> >> OK. I copied the tablesync note back to config.sgml definition of\n> >> 'max_replication_slots' and removed the link as suggested. Frankly, I\n> >> also thought it is a bit strange that the max_replication_slots in the\n> >> “Sending Servers” section was describing this parameter for\n> >> “Subscribers”. OTOH, I did not want to split the definition in half so\n> >> instead, I’ve added another Subscriber <varlistentry> that just refers\n> >> back to this place. It looks like an improvement to me.\n> >\n> >\n> > Hmm, I agree this is a tricky scenario. However, to me, it seems odd to\n> mention the parameter twice as this chapter of the docs just lists each\n> parameter and describes them. So, I'd probably remove the reference to it\n> in the subscriber section. We should describe it's usage in different\n> places in the logical replication part of the docs (as we do).\n>\n> The 'max_replication_slots' is problematic because it is almost like\n> having 2 different GUCs that happen to have the same name. So I\n> preferred it also gets a mention in the “Subscriber” section to make\n> it obvious that it wears 2 hats, but IIUC you prefer that 2nd mention\n> is not present because typically each GUC should appear once only in\n> this chapter. TBH, I think both ways could be successfully argued for\n> or against -- so I’m just going to leave this as-is for now and let\n> the committer decide.\n>\n\nSounds fair.\n\nI don't have any other feedback. This looks good to me.\n\nAlso, I don't see this patch in the 2023/01 commitfest. 
Might be worth\nmoving to that one.\n\nRegards,\nSamay\nMicrosoft\n\n\n>\n> > > + <para>\n> > > + <link\n> linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > > + must be set to at least the number of subscriptions (for apply\n> workers), plus\n> > > + some reserve for the table synchronization workers. Configuration\n> parameter\n> > > + <link\n> linkend=\"guc-max-worker-processes\"><varname>max_worker_processes</varname></link>\n> > > + may need to be adjusted to accommodate for replication workers,\n> at least (\n> > > + <link\n> linkend=\"guc-max-logical-replication-workers\"><varname>max_logical_replication_workers</varname></link>\n> > > + + <literal>1</literal>). Note, some extensions and parallel\n> queries also\n> > > + take worker slots from <varname>max_worker_processes</varname>.\n> > > + </para>\n> >\n> > Maybe do max_worker_processes in a new line like the rest.\n>\n> OK. Done as suggested.\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>",
"msg_date": "Fri, 9 Dec 2022 10:10:21 -0800",
"msg_from": "samay sharma <smilingsamay@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 5:10 AM samay sharma <smilingsamay@gmail.com> wrote:\n>\n\n>\n> I don't have any other feedback. This looks good to me.\n>\n> Also, I don't see this patch in the 2023/01 commitfest. Might be worth moving to that one.\n>\n\nHmm, it was already recorded in the 2022-11 commitfest [1], so I\nassumed it would just carry forward to the next one.\n\nAnyway, I've added it again to 2023-01 commitfest [2]. Thanks for telling me.\n\n------\n[1] 2022-11 CF - https://commitfest.postgresql.org/40/3959/\n[2] 2023-01 CF - https://commitfest.postgresql.org/41/4061/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 12 Dec 2022 09:46:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> On Sat, Dec 10, 2022 at 5:10 AM samay sharma <smilingsamay@gmail.com> wrote:\n>> Also, I don't see this patch in the 2023/01 commitfest. Might be worth moving to that one.\n\n> Hmm, it was already recorded in the 2022-11 commitfest [1], so I\n> assumed it would just carry forward to the next one.\n\nIan is still working on closing out the November 'fest :-(.\nI suspect that in a day or so that one will get moved, and\nyou will have duplicate entries in the January 'fest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Dec 2022 17:54:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On 2022-Dec-07, samay sharma wrote:\n\n> On Tue, Dec 6, 2022 at 11:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> > OK. I copied the tablesync note back to config.sgml definition of\n> > 'max_replication_slots' and removed the link as suggested. Frankly, I\n> > also thought it is a bit strange that the max_replication_slots in the\n> > “Sending Servers” section was describing this parameter for\n> > “Subscribers”. OTOH, I did not want to split the definition in half so\n> > instead, I’ve added another Subscriber <varlistentry> that just refers\n> > back to this place. It looks like an improvement to me.\n> \n> Hmm, I agree this is a tricky scenario. However, to me, it seems odd to\n> mention the parameter twice as this chapter of the docs just lists each\n> parameter and describes them. So, I'd probably remove the reference to it\n> in the subscriber section. We should describe it's usage in different\n> places in the logical replication part of the docs (as we do).\n\nI agree this is tricky. However, because they essentially have\ncompletely different behaviors on each side, and because we're\ndocumenting each side separately, to me it makes more sense to document\neach behavior separately, so I've split it. I also added mention at\neach side that the other one exists. My rationale is that a user is\nlikely going to search for stuff to set on one side first, then for\nstuff to set on the other side. So doing it this way maximizes\nhelpfulness (or so I hope anyway). I also added a separate index entry.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. :-)\"\n(David Garamond)\n\n\n",
"msg_date": "Mon, 12 Dec 2022 20:25:32 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On 2022-Dec-11, Tom Lane wrote:\n\n> Ian is still working on closing out the November 'fest :-(.\n> I suspect that in a day or so that one will get moved, and\n> you will have duplicate entries in the January 'fest.\n\nI've marked both as committed.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 20:43:35 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 6:25 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Dec-07, samay sharma wrote:\n>\n> > On Tue, Dec 6, 2022 at 11:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> > > OK. I copied the tablesync note back to config.sgml definition of\n> > > 'max_replication_slots' and removed the link as suggested. Frankly, I\n> > > also thought it is a bit strange that the max_replication_slots in the\n> > > “Sending Servers” section was describing this parameter for\n> > > “Subscribers”. OTOH, I did not want to split the definition in half so\n> > > instead, I’ve added another Subscriber <varlistentry> that just refers\n> > > back to this place. It looks like an improvement to me.\n> >\n> > Hmm, I agree this is a tricky scenario. However, to me, it seems odd to\n> > mention the parameter twice as this chapter of the docs just lists each\n> > parameter and describes them. So, I'd probably remove the reference to it\n> > in the subscriber section. We should describe it's usage in different\n> > places in the logical replication part of the docs (as we do).\n>\n> I agree this is tricky. However, because they essentially have\n> completely different behaviors on each side, and because we're\n> documenting each side separately, to me it makes more sense to document\n> each behavior separately, so I've split it. I also added mention at\n> each side that the other one exists. My rationale is that a user is\n> likely going to search for stuff to set on one side first, then for\n> stuff to set on the other side. So doing it this way maximizes\n> helpfulness (or so I hope anyway). I also added a separate index entry.\n>\n\nLGTM. Thank you for pushing this.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 13 Dec 2022 09:06:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Logical replication GUCs - added some xrefs"
}
] |
[
{
"msg_contents": "Avoid having to list all the possible object types twice. Instead, only \n_getObjectDescription() needs to know about specific object types. It \ncommunicates back to _printTocEntry() whether an owner is to be set.\n\nIn passing, remove the logic to use ALTER TABLE to set the owner of \nviews and sequences. This is no longer necessary. Furthermore, if \npg_dump doesn't recognize the object type, this is now a fatal error, \nnot a warning.",
"msg_date": "Mon, 24 Oct 2022 11:54:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_dump: Refactor code that constructs ALTER ... OWNER TO commands"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 5:54 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> Avoid having to list all the possible object types twice. Instead, only\n> _getObjectDescription() needs to know about specific object types. It\n> communicates back to _printTocEntry() whether an owner is to be set.\n>\n> In passing, remove the logic to use ALTER TABLE to set the owner of\n> views and sequences. This is no longer necessary. Furthermore, if\n> pg_dump doesn't recognize the object type, this is now a fatal error,\n> not a warning.\n\n\nMakes sense, passes all tests.\n\nIt's clearly out of scope for this very focused patch, but would it make\nsense for the TocEntry struct to be populated with an type enumeration\ninteger as well as the type string to make for clearer and faster sifting\nlater?\n\nOn Mon, Oct 24, 2022 at 5:54 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:Avoid having to list all the possible object types twice. Instead, only \n_getObjectDescription() needs to know about specific object types. It \ncommunicates back to _printTocEntry() whether an owner is to be set.\n\nIn passing, remove the logic to use ALTER TABLE to set the owner of \nviews and sequences. This is no longer necessary. Furthermore, if \npg_dump doesn't recognize the object type, this is now a fatal error, \nnot a warning.Makes sense, passes all tests.It's clearly out of scope for this very focused patch, but would it make sense for the TocEntry struct to be populated with an type enumeration integer as well as the type string to make for clearer and faster sifting later?",
"msg_date": "Tue, 1 Nov 2022 13:59:28 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Refactor code that constructs ALTER ... OWNER TO\n commands"
},
{
"msg_contents": "On 01.11.22 13:59, Corey Huinker wrote:\n> On Mon, Oct 24, 2022 at 5:54 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> Avoid having to list all the possible object types twice. Instead,\n> only\n> _getObjectDescription() needs to know about specific object types. It\n> communicates back to _printTocEntry() whether an owner is to be set.\n> \n> In passing, remove the logic to use ALTER TABLE to set the owner of\n> views and sequences. This is no longer necessary. Furthermore, if\n> pg_dump doesn't recognize the object type, this is now a fatal error,\n> not a warning.\n> \n> \n> Makes sense, passes all tests.\n\nCommitted.\n\n> It's clearly out of scope for this very focused patch, but would it make \n> sense for the TocEntry struct to be populated with an type enumeration \n> integer as well as the type string to make for clearer and faster \n> sifting later?\n\nThat could be better, but wouldn't that mean a change of the format of \npg_dump archives?\n\n\n",
"msg_date": "Wed, 2 Nov 2022 17:30:42 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump: Refactor code that constructs ALTER ... OWNER TO\n commands"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 5:30 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 01.11.22 13:59, Corey Huinker wrote:\n> > On Mon, Oct 24, 2022 at 5:54 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com\n> > <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> >\n> > Avoid having to list all the possible object types twice. Instead,\n> > only\n> > _getObjectDescription() needs to know about specific object types.\n> It\n> > communicates back to _printTocEntry() whether an owner is to be set.\n> >\n> > In passing, remove the logic to use ALTER TABLE to set the owner of\n> > views and sequences. This is no longer necessary. Furthermore, if\n> > pg_dump doesn't recognize the object type, this is now a fatal error,\n> > not a warning.\n> >\n> >\n> > Makes sense, passes all tests.\n>\n> Committed.\n>\n> > It's clearly out of scope for this very focused patch, but would it make\n> > sense for the TocEntry struct to be populated with an type enumeration\n> > integer as well as the type string to make for clearer and faster\n> > sifting later?\n>\n> That could be better, but wouldn't that mean a change of the format of\n> pg_dump archives?\n>\n\nSorry for the confusion, I was thinking strictly of the in memory\nrepresentation after it is extracted from the dictionary.\n\nOn Wed, Nov 2, 2022 at 5:30 PM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 01.11.22 13:59, Corey Huinker wrote:\n> On Mon, Oct 24, 2022 at 5:54 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> Avoid having to list all the possible object types twice. Instead,\n> only\n> _getObjectDescription() needs to know about specific object types. It\n> communicates back to _printTocEntry() whether an owner is to be set.\n> \n> In passing, remove the logic to use ALTER TABLE to set the owner of\n> views and sequences. This is no longer necessary. 
Furthermore, if\n> pg_dump doesn't recognize the object type, this is now a fatal error,\n> not a warning.\n> \n> \n> Makes sense, passes all tests.\n\nCommitted.\n\n> It's clearly out of scope for this very focused patch, but would it make \n> sense for the TocEntry struct to be populated with an type enumeration \n> integer as well as the type string to make for clearer and faster \n> sifting later?\n\nThat could be better, but wouldn't that mean a change of the format of \npg_dump archives?Sorry for the confusion, I was thinking strictly of the in memory representation after it is extracted from the dictionary.",
"msg_date": "Sat, 5 Nov 2022 13:39:09 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Refactor code that constructs ALTER ... OWNER TO\n commands"
}
] |
[
{
"msg_contents": "Hi\n\nRecently I have been working a lot with partitioned tables which contain a mix\nof local and foreign partitions, and find it would be very useful to be able to\neasily obtain an overview of which partitions are foreign and where they are\nlocated.\n\nCurrently, executing \"\\d+\" on a partitioned table lists the partitions\nlike this:\n\n postgres=# \\d+ parttest\n Partitioned table \"public.parttest\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n id | integer | | not null | | plain |\n | |\n val1 | text | | | | extended |\n | |\n val2 | text | | | | extended |\n | |\n Partition key: HASH (id)\n Partitions: parttest_10_0 FOR VALUES WITH (modulus 10, remainder 0),\n parttest_10_1 FOR VALUES WITH (modulus 10, remainder 1),\n parttest_10_2 FOR VALUES WITH (modulus 10, remainder 2),\n parttest_10_3 FOR VALUES WITH (modulus 10, remainder 3),\n parttest_10_4 FOR VALUES WITH (modulus 10, remainder 4),\n parttest_10_5 FOR VALUES WITH (modulus 10, remainder 5),\n parttest_10_6 FOR VALUES WITH (modulus 10, remainder 6),\n parttest_10_7 FOR VALUES WITH (modulus 10, remainder 7),\n parttest_10_8 FOR VALUES WITH (modulus 10, remainder 8),\n parttest_10_9 FOR VALUES WITH (modulus 10, remainder 9)\n\nwhich doesn't help much in that respect.\n\nAttached patch changes this output to:\n\n postgres=# \\d+ parttest\n Partitioned table \"public.parttest\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n id | integer | | not null | | plain |\n | |\n val1 | text | | | | extended |\n | |\n val2 | text | | | | extended |\n | |\n Partition key: HASH (id)\n Partitions: parttest_10_0 FOR VALUES WITH (modulus 10, remainder 0),\n parttest_10_1 FOR 
VALUES WITH (modulus 10, remainder\n1), server: \"fdw_node2\",\n parttest_10_2 FOR VALUES WITH (modulus 10, remainder 2),\n parttest_10_3 FOR VALUES WITH (modulus 10, remainder\n3), server: \"fdw_node2\",\n parttest_10_4 FOR VALUES WITH (modulus 10, remainder 4),\n parttest_10_5 FOR VALUES WITH (modulus 10, remainder\n5), server: \"fdw_node2\",\n parttest_10_6 FOR VALUES WITH (modulus 10, remainder 6),\n parttest_10_7 FOR VALUES WITH (modulus 10, remainder\n7), server: \"fdw_node2\",\n parttest_10_8 FOR VALUES WITH (modulus 10, remainder 8),\n parttest_10_9 FOR VALUES WITH (modulus 10, remainder\n9), server: \"fdw_node2\"\n\nwhich is much more informative, albeit a little more cluttered, but\nshort of using\nemojis I can't see any better way (suggestions welcome).\n\nFor completeness, output with child tables could look like this:\n\n postgres=# \\d+ inhtest\n Table \"public.inhtest\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n id | integer | | not null | | plain |\n | |\n val1 | text | | | | extended |\n | |\n val2 | text | | | | extended |\n | |\n Child tables: inhtest_10_0,\n inhtest_10_1 (server: \"fdw_node2\"),\n inhtest_10_2,\n inhtest_10_3 (server: \"fdw_node2\"),\n inhtest_10_4,\n inhtest_10_5 (server: \"fdw_node2\"),\n inhtest_10_6,\n inhtest_10_7 (server: \"fdw_node2\"),\n inhtest_10_8,\n inhtest_10_9 (server: \"fdw_node2\")\n Access method: heap\n\nWill add to next CF.\n\n\nRegards\n\nIan Barwick",
"msg_date": "Mon, 24 Oct 2022 21:44:18 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "[patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 09:44:18PM +0900, Ian Lawrence Barwick wrote:\n> Recently I have been working a lot with partitioned tables which contain a mix\n> of local and foreign partitions, and find it would be very useful to be able to\n> easily obtain an overview of which partitions are foreign and where they are\n> located.\n\n> Partitions: parttest_10_0 FOR VALUES WITH (modulus 10, remainder 0),\n> parttest_10_1 FOR VALUES WITH (modulus 10, remainder 1), server: \"fdw_node2\",\n\n> which is much more informative, albeit a little more cluttered, but\n\n> @@ -3445,6 +3451,10 @@ describeOneTableDetails(const char *schemaname,\n> \t\t\t\tif (child_relkind == RELKIND_PARTITIONED_TABLE ||\n> \t\t\t\t\tchild_relkind == RELKIND_PARTITIONED_INDEX)\n> \t\t\t\t\tappendPQExpBufferStr(&buf, \", PARTITIONED\");\n> +\t\t\t\telse if (child_relkind == RELKIND_FOREIGN_TABLE && is_partitioned)\n> +\t\t\t\t\tappendPQExpBuffer(&buf, \", server: \\\"%s\\\"\", PQgetvalue(result, i, 4));\n> +\t\t\t\telse if (child_relkind == RELKIND_FOREIGN_TABLE && !is_partitioned)\n> +\t\t\t\t\tappendPQExpBuffer(&buf, \" (server: \\\"%s\\\")\", PQgetvalue(result, i, 4));\n> \t\t\t\tif (strcmp(PQgetvalue(result, i, 2), \"t\") == 0)\n> \t\t\t\t\tappendPQExpBufferStr(&buf, \" (DETACH PENDING)\");\n> \t\t\t\tif (i < tuples - 1)\n\nTo avoid the clutter that you mentioned, I suggest that this should show\nthat the table *is* foreign, but without the server - if you want to\nknow the server (or its options), you can run another \\d command for\nthat (or run a SQL query).\n\nThat's similar to what's shown if the child is partitioned: a suffix\nlike \", PARTITIONED\", but without show the partition strategy.\n\nI had a patch to allow \\d++, and maybe showing the foreign server would\nbe reasonable for that. But the patch got closed, evidently lack of\ninterest.\n\n\n",
"msg_date": "Mon, 24 Oct 2022 08:03:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "On 2022-Oct-24, Justin Pryzby wrote:\n\n> On Mon, Oct 24, 2022 at 09:44:18PM +0900, Ian Lawrence Barwick wrote:\n\n> > +\t\t\t\telse if (child_relkind == RELKIND_FOREIGN_TABLE && is_partitioned)\n> > +\t\t\t\t\tappendPQExpBuffer(&buf, \", server: \\\"%s\\\"\", PQgetvalue(result, i, 4));\n\n> To avoid the clutter that you mentioned, I suggest that this should show\n> that the table *is* foreign, but without the server - if you want to\n> know the server (or its options), you can run another \\d command for\n> that (or run a SQL query).\n\nBut 'server \"%s\"' is not much longer than \"foreign\", and it's not like\nyour saving any vertical space at all (you're just using space that\nwould otherwise be empty), so I'm not sure it is better. I would vote\nfor showing the server.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"You don't solve a bad join with SELECT DISTINCT\" #CupsOfFail\nhttps://twitter.com/connor_mc_d/status/1431240081726115845\n\n\n",
"msg_date": "Thu, 27 Oct 2022 09:12:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "2022年10月27日(木) 16:12 Alvaro Herrera <alvherre@alvh.no-ip.org>:\n>\n> On 2022-Oct-24, Justin Pryzby wrote:\n>\n> > On Mon, Oct 24, 2022 at 09:44:18PM +0900, Ian Lawrence Barwick wrote:\n>\n> > > + else if (child_relkind == RELKIND_FOREIGN_TABLE && is_partitioned)\n> > > + appendPQExpBuffer(&buf, \", server: \\\"%s\\\"\", PQgetvalue(result, i, 4));\n>\n> > To avoid the clutter that you mentioned, I suggest that this should show\n> > that the table *is* foreign, but without the server - if you want to\n> > know the server (or its options), you can run another \\d command for\n> > that (or run a SQL query).\n>\n> But 'server \"%s\"' is not much longer than \"foreign\", and it's not like\n> your saving any vertical space at all (you're just using space that\n> would otherwise be empty), so I'm not sure it is better. I would vote\n> for showing the server.\n\nIndeed; my particular use-case is being able to see how the (foreign) tablesare\ndistributed over one or more foreign servers, so while being able to see whether\nit's a foreign table or not helps, it's not all that much more disruptive to\ninclude the identity of the server (unless the server's name is maxing out\nNAMEDATALEN, dunno how prevalent that is in the wild, but it's not something\nI've ever felt the need to do).\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Mon, 31 Oct 2022 21:13:25 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 09:44:18PM +0900, Ian Lawrence Barwick wrote:\n> Recently I have been working a lot with partitioned tables which contain a mix\n> of local and foreign partitions, and find it would be very useful to be able to\n> easily obtain an overview of which partitions are foreign and where they are\n> located.\n> \n> Currently, executing \"\\d+\" on a partitioned table lists the partitions\n> like this:\n\nHmm. I am not sure that we should add this much amount of\ninformation, particularly for the server bits. First, worth\nmentioning, pg_partition_tree() is very handy when it comes to know\npartition information, like: \nSELECT relid, relkind\n FROM pg_partition_tree('parttest') p, pg_class c\n where c.oid = p.relid;\n\nAnyway, saying that, we do something similar for partitioned indexes\nand tables with \\d+, aka around L3445:\n if (child_relkind == RELKIND_PARTITIONED_TABLE ||\n child_relkind == RELKIND_PARTITIONED_INDEX)\n appendPQExpBufferStr(&buf, \", PARTITIONED\");\n\nThis is the same, just for a new relkind.\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 14:55:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Oct 24, 2022 at 09:44:18PM +0900, Ian Lawrence Barwick wrote:\n>> Recently I have been working a lot with partitioned tables which contain a mix\n>> of local and foreign partitions, and find it would be very useful to be able to\n>> easily obtain an overview of which partitions are foreign and where they are\n>> located.\n\n> Hmm. I am not sure that we should add this much amount of\n> information, particularly for the server bits.\n\nFWIW, I am also in favor of adding \", FOREIGN\" but no more.\nMy concern is that as submitted, the patch greatly increases\nthe cost of the underlying query by adding two more catalogs\nto the join. I don't think imposing such a cost on everybody\n(whether they use foreign partitions or not) is worth that. But\nwe can add \", FOREIGN\" for free since we have the relkind anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Nov 2022 12:39:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "2022年11月6日(日) 1:39 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Oct 24, 2022 at 09:44:18PM +0900, Ian Lawrence Barwick wrote:\n> >> Recently I have been working a lot with partitioned tables which contain a mix\n> >> of local and foreign partitions, and find it would be very useful to be able to\n> >> easily obtain an overview of which partitions are foreign and where they are\n> >> located.\n>\n> > Hmm. I am not sure that we should add this much amount of\n> > information, particularly for the server bits.\n>\n> FWIW, I am also in favor of adding \", FOREIGN\" but no more.\n> My concern is that as submitted, the patch greatly increases\n> the cost of the underlying query by adding two more catalogs\n> to the join. I don't think imposing such a cost on everybody\n> (whether they use foreign partitions or not) is worth that. But\n> we can add \", FOREIGN\" for free since we have the relkind anyway.\n\nFair enough, make sense.\n\n Revised version added per suggestions, which produces output like this:\n\n postgres=# \\d+ parttest\n Partitioned table \"public.parttest\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n --------+---------+-----------+----------+---------+----------+-------------+--------------+-------------\n id | integer | | not null | | plain |\n | |\n val1 | text | | | | extended |\n | |\n val2 | text | | | | extended |\n | |\n Partition key: HASH (id)\n Partitions: parttest_10_0 FOR VALUES WITH (modulus 10, remainder 0),\n parttest_10_1 FOR VALUES WITH (modulus 10, remainder\n1), FOREIGN,\n parttest_10_2 FOR VALUES WITH (modulus 10, remainder 2),\n parttest_10_3 FOR VALUES WITH (modulus 10, remainder\n3), FOREIGN,\n parttest_10_4 FOR VALUES WITH (modulus 10, remainder 4),\n parttest_10_5 FOR VALUES WITH (modulus 10, remainder\n5), FOREIGN,\n parttest_10_6 FOR VALUES WITH (modulus 10, remainder 6),\n parttest_10_7 FOR VALUES WITH 
(modulus 10, remainder\n7), FOREIGN,\n parttest_10_8 FOR VALUES WITH (modulus 10, remainder 8),\n parttest_10_9 FOR VALUES WITH (modulus 10, remainder 9), FOREIGN\n\n\nRegards\n\nIan Barwick",
"msg_date": "Sun, 6 Nov 2022 21:23:01 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "On Sun, Nov 06, 2022 at 09:23:01PM +0900, Ian Lawrence Barwick wrote:\n> Fair enough, make sense.\n\nFine by me and the patch looks OK. I'd like to apply this if there\nare no objections.\n--\nMichael",
"msg_date": "Mon, 7 Nov 2022 15:37:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Nov 06, 2022 at 09:23:01PM +0900, Ian Lawrence Barwick wrote:\n>> Fair enough, make sense.\n\n> Fine by me and the patch looks OK. I'd like to apply this if there\n> are no objections.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Nov 2022 01:43:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "On Mon, Nov 07, 2022 at 01:43:22AM -0500, Tom Lane wrote:\n> WFM.\n\nOkay, applied as bd95816, then.\n--\nMichael",
"msg_date": "Tue, 8 Nov 2022 14:49:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "2022年11月8日(火) 14:49 Michael Paquier <michael@paquier.xyz>:\n>\n> On Mon, Nov 07, 2022 at 01:43:22AM -0500, Tom Lane wrote:\n> > WFM.\n>\n> Okay, applied as bd95816, then.\n\nThanks!\n\nCF entry updated accordingly.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 8 Nov 2022 15:38:22 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
},
{
"msg_contents": "On Tue, Nov 08, 2022 at 03:38:22PM +0900, Ian Lawrence Barwick wrote:\n> CF entry updated accordingly.\n\nMissed this part, thanks..\n--\nMichael",
"msg_date": "Wed, 9 Nov 2022 14:06:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Have psql's \\d+ indicate foreign partitions"
}
]
[
{
"msg_contents": "Hello,\n\nWhen studying the weird planner issue reported here [1], I came up with \nthe attached patch. It reduces the probability of calling \nget_actual_variable_range().\n\nThe patch applies to the master branch.\n\nHow to test :\n\nCREATE TABLE foo (a bigint, b TEXT) WITH (autovacuum_enabled = off);\nINSERT INTO foo SELECT i%213, md5(i::text) from \ngenerate_series(1,1000000) i;\nVACUUM ANALYZE foo;\nSELECT * FROM pg_stats WHERE tablename = 'foo' AND attname='a'\\gx\nCREATE INDEX ON foo(a);\nDELETE FROM foo WHERE a = 212;\nEXPLAIN (BUFFERS) SELECT count(a) FROM foo WHERE a > 208;\n\nWithout this patch, you will observe at least 4694 shared hits (which \nare mostly heap fetches). If you apply the patch, you will observe very \nfew of them.\n\nYou should run the EXPLAIN on a standby, if you want to observe the heap \nfetches more than one time (because of killed index tuples being ignored).\n\nBest regards,\nFrédéric\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAECtzeVPM4Oi6dTdqVQmjoLkDBVChNj7ed3hNs1RGrBbwCJ7Cw%40mail.gmail.com",
"msg_date": "Mon, 24 Oct 2022 17:26:50 +0200",
"msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] minor optimization for ineq_histogram_selectivity()"
},
{
"msg_contents": "\n\nOn 10/24/22 17:26, Frédéric Yhuel wrote:\n> Hello,\n> \n> When studying the weird planner issue reported here [1], I came up with \n> the attached patch. It reduces the probability of calling \n> get_actual_variable_range().\n> \n> The patch applies to the master branch.\n> \n> How to test :\n> \n> CREATE TABLE foo (a bigint, b TEXT) WITH (autovacuum_enabled = off);\n> INSERT INTO foo SELECT i%213, md5(i::text) from \n> generate_series(1,1000000) i;\n> VACUUM ANALYZE foo;\n> SELECT * FROM pg_stats WHERE tablename = 'foo' AND attname='a'\\gx\n> CREATE INDEX ON foo(a);\n> DELETE FROM foo WHERE a = 212;\n> EXPLAIN (BUFFERS) SELECT count(a) FROM foo WHERE a > 208;\n> \n\nWith the above example, the variables \"lobound\", \"hibound\", and \"probe\" \nwould vary like this :\n\nwithout patch :\n\nlobound hibound probe\n---------------------------------------\n0 101 50\n51 101 76\n77 101 89\n90 101 95\n96 101 98\n99 101 100\n99 100 99\n99 99\n\n\nwith patch :\n\nlobound hibound probe\n---------------------------------------\n0 101 50\n51 101 75\n76 101 88\n89 101 94\n95 101 97\n98 101 99\n98 99 98\n99 99\n\nSo we find the correct right end of the histogram bin (99) in both \ncases, but \"probe\" doesn't reach 100 in the latter one, and\nget_actual_variable_range() is never called.\n\nNow, if we'd run the query SELECT count(a) FROM foo WHERE a > 211 :\n\nwithout patch :\n\nlobound hibound probe\n---------------------------------------\n0 101 50\n51 101 76\n77 101 89\n90 101 95\n96 101 98\n99 101 100\n99 100 99\n100 100\n\nwith patch :\n\nlobound hibound probe\n---------------------------------------\n0 101 50\n51 101 75\n76 101 88\n89 101 94\n95 101 97\n98 101 99\n100 101 100\n100 100\n\n\nHere, the correct right end of the histogram bin (100) is also found is \nboth cases.\n\nI'm well aware that an example doesn't prove the correctness of an \nalgorithm, though.\n\nBest regards,\nFrédéric\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:30:33 +0100",
"msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor optimization for ineq_histogram_selectivity()"
},
{
"msg_contents": "\n\nOn 10/24/22 17:26, Frédéric Yhuel wrote:\n> Hello,\n> \n> When studying the weird planner issue reported here [1], I came up with \n> the attached patch. It reduces the probability of calling \n> get_actual_variable_range().\n\nThis isn't very useful anymore thanks to this patch: \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9c6ad5eaa957bdc2132b900a96e0d2ec9264d39c\n\nUnless we want to save a hundred page reads in rare cases.\n\n\n",
"msg_date": "Wed, 23 Nov 2022 16:16:38 +0100",
"msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor optimization for ineq_histogram_selectivity()"
},
{
"msg_contents": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com> writes:\n> On 10/24/22 17:26, Frédéric Yhuel wrote:\n>> When studying the weird planner issue reported here [1], I came up with \n>> the attached patch. It reduces the probability of calling \n>> get_actual_variable_range().\n\n> This isn't very useful anymore thanks to this patch: \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9c6ad5eaa957bdc2132b900a96e0d2ec9264d39c\n\nI hadn't looked at this patch before, but now that I have, I'm inclined\nto reject it anyway. It just moves the problem around: now, instead of\npossibly doing an unnecessary index probe at the right end, you're\npossibly doing an unnecessary index probe at the left end. It also\nlooks quite weird compared to the normal coding of binary search.\n\nI wonder if there'd be something to be said for leaving the initial\nprobe calculation alone and doing this:\n\n else if (probe == sslot.nvalues - 1 && sslot.nvalues > 2)\n+ {\n+ /* Don't probe the endpoint until we have to. */\n+ if (probe > lobound)\n+ probe--;\n+ else\n have_end = get_actual_variable_range(root,\n vardata,\n sslot.staop,\n collation,\n NULL,\n &sslot.values[probe]);\n+ }\n\nOn the whole though, it seems like a wart.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Nov 2022 10:59:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor optimization for ineq_histogram_selectivity()"
},
{
"msg_contents": "\n\nOn 11/23/22 16:59, Tom Lane wrote:\n> =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com> writes:\n>> On 10/24/22 17:26, Frédéric Yhuel wrote:\n>>> When studying the weird planner issue reported here [1], I came up with\n>>> the attached patch. It reduces the probability of calling\n>>> get_actual_variable_range().\n> \n>> This isn't very useful anymore thanks to this patch:\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9c6ad5eaa957bdc2132b900a96e0d2ec9264d39c\n> \n> I hadn't looked at this patch before, but now that I have, I'm inclined\n> to reject it anyway. It just moves the problem around: now, instead of\n> possibly doing an unnecessary index probe at the right end, you're\n> possibly doing an unnecessary index probe at the left end.\n\nIndeed... it seemed to me that both versions would do an unnecessary \nindex probe at the left end, but I wasn't careful enough :-/\n\n> It also\n> looks quite weird compared to the normal coding of binary search.\n> \n\nThat's right.\n\n> I wonder if there'd be something to be said for leaving the initial\n> probe calculation alone and doing this:\n> \n> else if (probe == sslot.nvalues - 1 && sslot.nvalues > 2)\n> + {\n> + /* Don't probe the endpoint until we have to. */\n> + if (probe > lobound)\n> + probe--;\n> + else\n> have_end = get_actual_variable_range(root,\n> vardata,\n> sslot.staop,\n> collation,\n> NULL,\n> &sslot.values[probe]);\n> + }\n> \n> On the whole though, it seems like a wart.\n> \n>\n\nYeah... it's probably wiser not risking introducing a bug, only to save \nan index probe in rare cases (and only 100 reads, thanks to 9c6ad5ea).\n\nThank you for having had a look at it.\n\nBest regards,\nFrédéric\n\n\n\n",
"msg_date": "Thu, 24 Nov 2022 09:52:28 +0100",
"msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor optimization for ineq_histogram_selectivity()"
}
]
[
{
"msg_contents": "Hi -hackers,\n\nWorking with Stephen, I am attempting to pick up some of the work that\nwas left off with TDE and the key management infrastructure. I have\nrebased Bruce's KMS/TDE patches as they existed on the\nhttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption wiki\npage, which are enclosed in this email.\n\nI would love to open a discussion about how to move forward and get\nsome of these features built out. The historical threads here are\nquite long and complicated; is there a \"current state\" other than the\nwiki that reflects the general thinking on this feature? Any major\ndevelopments in direction that would not be reflected in the code from\nMay 2021?\n\nThanks,\n\nDavid",
"msg_date": "Mon, 24 Oct 2022 11:29:19 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Moving forward with TDE"
},
{
"msg_contents": "Hi David,\n\n> Working with Stephen, I am attempting to pick up some of the work that\n> was left off with TDE and the key management infrastructure. I have\n> rebased Bruce's KMS/TDE patches as they existed on the\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption wiki\n> page, which are enclosed in this email.\n\nI'm happy to see that the TDE effort was picked up.\n\n> I would love to open a discussion about how to move forward and get\n> some of these features built out. The historical threads here are\n> quite long and complicated; is there a \"current state\" other than the\n> wiki that reflects the general thinking on this feature? Any major\n> developments in direction that would not be reflected in the code from\n> May 2021?\n\nThe patches seem to be well documented and decomposed in small pieces.\nThat's good.\n\nUnless somebody in the community remembers open questions/issues with\nTDE that were never addressed I suggest simply iterating with our\nusual testing/reviewing process. For now I'm going to change the\nstatus of the CF entry [1] to \"Waiting for Author\" since the patchset\ndoesn't pass the CI [2].\n\nOne limitation of the design described on the wiki I see is that it\nseems to heavily rely on AES:\n\n> We will use Advanced Encryption Standard (AES) [4]. We will offer three key length options (128, 192, and 256-bits) selected at initdb time with --file-encryption-method\n\n(there doesn't seem to be any mention of the hash/MAC algorithms,\nthat's odd). In the future we should be able to add the support of\nalternative algorithms. The reason is that the algorithms can become\nweak every 20 years or so, and the preferred algorithms may also\ndepend on the region. 
This should NOT be implemented in this\nparticular patchset, but the design shouldn't prevent from\nimplementing this in the future.\n\n[1]: https://commitfest.postgresql.org/40/3985/\n[2]: http://cfbot.cputube.org/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 3 Nov 2022 17:09:00 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "> Unless somebody in the community remembers open questions/issues with\n> TDE that were never addressed I suggest simply iterating with our\n> usual testing/reviewing process. For now I'm going to change the\n> status of the CF entry [1] to \"Waiting for Author\" since the patchset\n> doesn't pass the CI [2].\n\nThanks, enclosed is a new version that is rebased on HEAD and fixes a\nbug that the new pg_control_init() test picked up.\n\nKnown issues (just discovered by me in testing the latest revision) is\nthat databases created from `template0` are not decrypting properly,\nbut `template1` works fine, so going to dig in on that soon.\n\n> One limitation of the design described on the wiki I see is that it\n> seems to heavily rely on AES:\n>\n> > We will use Advanced Encryption Standard (AES) [4]. We will offer three key length options (128, 192, and 256-bits) selected at initdb time with --file-encryption-method\n>\n> (there doesn't seem to be any mention of the hash/MAC algorithms,\n> that's odd). In the future we should be able to add the support of\n> alternative algorithms. The reason is that the algorithms can become\n> weak every 20 years or so, and the preferred algorithms may also\n> depend on the region. This should NOT be implemented in this\n> particular patchset, but the design shouldn't prevent from\n> implementing this in the future.\n\nYes, we definitely are considering multiple algorithms support as part\nof this effort.\n\nBest,\n\nDavid",
"msg_date": "Thu, 3 Nov 2022 17:06:23 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 3:36 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> > Unless somebody in the community remembers open questions/issues with\n> > TDE that were never addressed I suggest simply iterating with our\n> > usual testing/reviewing process. For now I'm going to change the\n> > status of the CF entry [1] to \"Waiting for Author\" since the patchset\n> > doesn't pass the CI [2].\n>\n> Thanks, enclosed is a new version that is rebased on HEAD and fixes a\n> bug that the new pg_control_init() test picked up.\n\nI was looking into the documentation patches 0001 and 0002, I think\nthe explanation is very clear. I have a few questions/comments\n\n+By not using the database id in the IV, CREATE DATABASE can copy the\n+heap/index files from the old database to a new one without\n+decryption/encryption. Both page copies are valid. Once a database\n+changes its pages, it gets new LSNs, and hence new IV.\n\nHow about the WAL_LOG method for creating a database? because in that\nwe get the new LSN for the pages in the new database, so do we\nreencrypt, if yes then this documentation needs to be updated\notherwise we might need to add that code.\n\n+changes its pages, it gets new LSNs, and hence new IV. Using only the\n+LSN and page number also avoids requiring pg_upgrade to preserve\n+database oids, tablespace oids, and relfilenodes.\n\nI think this line needs to be changed, because now we are already\npreserving dbid/tbsid/relfilenode. So even though we are not using\nthose in IV there is no point in saying we are avoiding that\nrequirement.\n\nI will review the remaining patches soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Nov 2022 14:12:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 9:29 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n> I would love to open a discussion about how to move forward and get\n> some of these features built out. The historical threads here are\n> quite long and complicated; is there a \"current state\" other than the\n> wiki that reflects the general thinking on this feature? Any major\n> developments in direction that would not be reflected in the code from\n> May 2021?\n\nI don't think the patchset here has incorporated the results of the\ndiscussion [1] that happened at the end of 2021. For example, it looks\nlike AES-CTR is still in use for the pages, which I thought was\nalready determined to be insufficient.\n\nThe following next steps were proposed in that thread:\n\n> 1. modify temporary file I/O to use a more centralized API\n> 2. modify the existing cluster file encryption patch to use XTS with a\n> IV that uses more than the LSN\n> 3. add XTS regression test code like CTR\n> 4. create WAL encryption code using CTR\n\nDoes this patchset need review before those steps are taken (or was\nthere additional conversation/work that I missed)?\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/20211013222648.GA373%40momjian.us\n\n\n",
"msg_date": "Tue, 15 Nov 2022 11:07:54 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "\n> On Nov 15, 2022, at 1:08 PM, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> On Mon, Oct 24, 2022 at 9:29 AM David Christensen\n> <david.christensen@crunchydata.com> wrote:\n>> I would love to open a discussion about how to move forward and get\n>> some of these features built out. The historical threads here are\n>> quite long and complicated; is there a \"current state\" other than the\n>> wiki that reflects the general thinking on this feature? Any major\n>> developments in direction that would not be reflected in the code from\n>> May 2021?\n> \n> I don't think the patchset here has incorporated the results of the\n> discussion [1] that happened at the end of 2021. For example, it looks\n> like AES-CTR is still in use for the pages, which I thought was\n> already determined to be insufficient.\n\nGood to know about the next steps, thanks. \n\n> The following next steps were proposed in that thread:\n> \n>> 1. modify temporary file I/O to use a more centralized API\n>> 2. modify the existing cluster file encryption patch to use XTS with a\n>> IV that uses more than the LSN\n>> 3. add XTS regression test code like CTR\n>> 4. create WAL encryption code using CTR\n> \n> Does this patchset need review before those steps are taken (or was\n> there additional conversation/work that I missed)?\n\nThis was just a refresh of the old patches on the wiki to work as written on HEAD. If there are known TODOs here this then that work is still needing to be done. \n\nI was going to take 2) and Stephen was going to work on 3); I am not sure about the other two but will review the thread you pointed to. Thanks for pointing that out. \n\nDavid\n\n\n\n",
"msg_date": "Tue, 15 Nov 2022 13:39:27 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 11:39 AM David Christensen\n<david.christensen@crunchydata.com> wrote:\n> Good to know about the next steps, thanks.\n\nYou're welcome!\n\n> This was just a refresh of the old patches on the wiki to work as written on HEAD. If there are known TODOs here this then that work is still needing to be done.\n>\n> I was going to take 2) and Stephen was going to work on 3); I am not sure about the other two but will review the thread you pointed to. Thanks for pointing that out.\n\nI've attached the diffs I'm carrying to build this under meson (as\nwell as -Wshadow; my removal of the two variables probably needs some\nscrutiny). It looks like the testcrypto executable will need\nsubstantial changes after the common/hex.h revert.\n\n--Jacob",
"msg_date": "Tue, 15 Nov 2022 12:08:40 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Hi Jacob,\n\nThanks, I've added this patch in my tree [1]. (For now, just adding\nfixes and the like atop the original separate patches, but will\neventually get things winnowed down into probably the same 12 parts\nthe originals were reviewed in.\n\nBest,\n\nDavid\n\n[1] https://github.com/pgguru/postgres/tree/tde\n\n\n",
"msg_date": "Thu, 17 Nov 2022 10:02:05 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Hi Dilip,\n\nThanks for the feedback here. I will review the docs changes and add to my tree.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Thu, 17 Nov 2022 10:34:48 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Fri, 4 Nov 2022 at 03:36, David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> > Unless somebody in the community remembers open questions/issues with\n> > TDE that were never addressed I suggest simply iterating with our\n> > usual testing/reviewing process. For now I'm going to change the\n> > status of the CF entry [1] to \"Waiting for Author\" since the patchset\n> > doesn't pass the CI [2].\n>\n> Thanks, enclosed is a new version that is rebased on HEAD and fixes a\n> bug that the new pg_control_init() test picked up.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nb82557ecc2ebbf649142740a1c5ce8d19089f620 ===\n=== applying patch\n./v2-0004-cfe-04-common_over_cfe-03-scripts-squash-commit.patch\npatching file src/common/Makefile\nHunk #2 FAILED at 84.\n1 out of 2 hunks FAILED -- saving rejects to file src/common/Makefile.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3985.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:57:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nI have decided to write a review here in terms of whether we want this feature, and perhaps the way we should look at encryption as a project down the road, since I think this is only the beginning. I am hoping to run some full tests of the feature sometime in coming weeks. Right now this review is limited to the documentation and documented feature.\r\n\r\nFrom the documentation, the primary threat model of TDE is to prevent decryption of data from archived wal segments (and data files), for example on a backup system. While there are other methods around this problem to date, I think that this feature is worth pursuing for that reason. I want to address a couple of reasons for this and then go into some reservations I have about how some of this is documented.\r\n\r\nThere are current workarounds to ensuring encryption at rest, but these have a number of problems. Encryption passphrases end up lying around the system in various places. Key rotation is often difficult. And one mistake can easily render all efforts ineffective. TDE solves these problems. The overall design from the internal docs looks solid. This definitely is something I would recommend for many users.\r\n\r\nI have a couple small caveats though. Encryption of data is a large topic and there isn't a one-size-fits-all solution to industrial or state requirements. Having all this key management available in PostgreSQL is a very good thing. Long run it is likely to end up being extensible, and therefore both more powerful and offering a wider range of choices for solution architects. Implementing encryption is also something that is easy to mess up. For this reason I think it would be great if we had a standardized format for discussing encryption options that we could use going forward. 
I don't think that should be held against this patch but I think we need to start discussing it now because it will be a bigger problem later.\r\n\r\nA second caveat I have is that key management is a topic where you really need a good overview of internals in order to implement effectively. If you don't know how an SSL handshake works or what is in a certificate, you can easily make mistakes in setting up SSL. I can see the same thing happening here. For example, I don't think it would be safe to leave the KEK on an encrypted filesystem that is decrypted at runtime (or at least I wouldn't consider that safe -- your appetite for risk may vary).\r\n\r\nMy proposal would be to have build a template for encryption options in the documentation. This could include topics like SSL as well. In such a template we'd have sections like \"Threat model,\" \"How it works,\" \"Implementation Requirements\" and so forth. Again I don't think this needs to be part of the current patch but I think it is something we need to start thinking about now. Maybe after this goes in, I can present a proposed documentation patch.\r\n\r\nI will also note that I don't consider myself to be very qualified on topics like encryption. I can reason about key management to some extent but some implementation details may be beyond me. I would hope we could get some extra review on this patch set soon.",
"msg_date": "Tue, 07 Mar 2023 03:07:16 +0000",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Greetings,\n\n* Chris Travers (chris.travers@gmail.com) wrote:\n> From the documentation, the primary threat model of TDE is to prevent decryption of data from archived wal segments (and data files), for example on a backup system. While there are other methods around this problem to date, I think that this feature is worth pursuing for that reason. I want to address a couple of reasons for this and then go into some reservations I have about how some of this is documented.\n\nAgreed, though the latest efforts include an option for *authenticated*\nencryption as well as unauthenticated. That makes it much more\ndifficult to make undetected changes to the data that's protected by\nthe authenticated encryption being used.\n\n> There are current workarounds to ensuring encryption at rest, but these have a number of problems. Encryption passphrases end up lying around the system in various places. Key rotation is often difficult. And one mistake can easily render all efforts ineffective. TDE solves these problems. The overall design from the internal docs looks solid. This definitely is something I would recommend for many users.\n\nThere's clearly user demand for it as there's a number of organizations\nwho have forks which are providing it in one shape or another. This\nkind of splintering of the community is actually an actively bad thing\nfor the project and is part of what killed Unix, by at least some pretty\nreputable accounts, in my view.\n\n> I have a couple small caveats though. Encryption of data is a large topic and there isn't a one-size-fits-all solution to industrial or state requirements. Having all this key management available in PostgreSQL is a very good thing. Long run it is likely to end up being extensible, and therefore both more powerful and offering a wider range of choices for solution architects. Implementing encryption is also something that is easy to mess up. 
For this reason I think it would be great if we had a standardized format for discussing encryption options that we could use going forward. I don't think that should be held against this patch but I think we need to start discussing it now because it will be a bigger problem later.\n\nDo you have a suggestion as to the format to use?\n\n> A second caveat I have is that key management is a topic where you really need a good overview of internals in order to implement effectively. If you don't know how an SSL handshake works or what is in a certificate, you can easily make mistakes in setting up SSL. I can see the same thing happening here. For example, I don't think it would be safe to leave the KEK on an encrypted filesystem that is decrypted at runtime (or at least I wouldn't consider that safe -- your appetite for risk may vary).\n\nAgreed that we should document this and make clear that the KEK is\nnecessary for server start but absolutely should be kept as safe as\npossible and certainly not stored on disk somewhere nearby the encrypted\ncluster.\n\n> My proposal would be to have build a template for encryption options in the documentation. This could include topics like SSL as well. In such a template we'd have sections like \"Threat model,\" \"How it works,\" \"Implementation Requirements\" and so forth. Again I don't think this needs to be part of the current patch but I think it is something we need to start thinking about now. Maybe after this goes in, I can present a proposed documentation patch.\n\nI'm not entirely sure that it makes sense to lump this and TLS in the\nsame place as they end up being rather independent at the end of the\nday. If you have ideas for how to improve the documentation, I'd\ncertainly encourage you to go ahead and work on that and submit it as a\npatch rather than waiting for this to actually land in core. 
Having\ngood and solid documentation is something that will help this get in,\nafter all, and to the extent that it's covering existing topics like\nTLS, those could likely be included independently and that would be of\nbenefit to everyone.\n\n> I will also note that I don't consider myself to be very qualified on topics like encryption. I can reason about key management to some extent but some implementation details may be beyond me. I would hope we could get some extra review on this patch set soon.\n\nCertainly agree with you there though there's an overall trajectory of\npatches involved in all of this that's a bit deep. The plan is to\ndiscuss that at PGCon (On the Road to TDE) and at the PGCon\nUnconference after. I certainly hope those interested will be there.\nI'm also happy to have a call with anyone interested in this effort\nindependent of that, of course.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 8 Mar 2023 16:25:04 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 04:25:04PM -0500, Stephen Frost wrote:\n> Agreed, though the latest efforts include an option for *authenticated*\n> encryption as well as unauthenticated. That makes it much more\n> difficult to make undetected changes to the data that's protected by\n> the authenticated encryption being used.\n\nI thought some more about this. GCM-style authentication of encrypted\ndata has value because it assumes the two end points are secure but that\na malicious actor could modify data during transfer. In the Postgres\ncase, it seems the two end points and the transfer are all in the same\nplace. Therefore, it is unclear to me the value of using GCM-style\nauthentication because if the GCM-level can be modified, so can the end\npoints, and the encryption key exposed.\n\n> There's clearly user demand for it as there's a number of organizations\n> who have forks which are providing it in one shape or another. This\n> kind of splintering of the community is actually an actively bad thing\n> for the project and is part of what killed Unix, by at least some pretty\n> reputable accounts, in my view.\n\nYes, the number of commercial implementations of this is a concern. Of\ncourse, it is also possible that those commercial implementations are\nmeeting checkbox requirements rather than technical ones, and the\ncommunity has been hostile to check box-only features.\n\n> Certainly agree with you there though there's an overall trajectory of\n> patches involved in all of this that's a bit deep. The plan is to\n> discuss that at PGCon (On the Road to TDE) and at the PGCon\n> Unconference after. I certainly hope those interested will be there.\n> I'm also happy to have a call with anyone interested in this effort\n> independent of that, of course.\n\nI will not be attending Ottawa.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. 
They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Mon, 27 Mar 2023 12:38:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Mar 27, 2023 at 12:38 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Mar 8, 2023 at 04:25:04PM -0500, Stephen Frost wrote:\n> > Agreed, though the latest efforts include an option for *authenticated*\n> > encryption as well as unauthenticated. That makes it much more\n> > difficult to make undetected changes to the data that's protected by\n> > the authenticated encryption being used.\n>\n> I thought some more about this. GCM-style authentication of encrypted\n> data has value because it assumes the two end points are secure but that\n> a malicious actor could modify data during transfer. In the Postgres\n> case, it seems the two end points and the transfer are all in the same\n> place. Therefore, it is unclear to me the value of using GCM-style\n> authentication because if the GCM-level can be modified, so can the end\n> points, and the encryption key exposed.\n\n\nWhat are the two end points you are referring to and why don’t you feel\nthere is an opportunity between them for a malicious actor to attack the\nsystem?\n\nThere are simpler cases to consider than an online attack on a single\nindependent system where an attacker having access to modify the data in\ntransit between PG and the storage would imply the attacker also having\naccess to read keys out of PG’s memory.\n\nAs specific examples, consider:\n\nAn attack against the database system where the database server is shut\ndown, or a backup, and the encryption key isn’t available on the system.\n\nThe backup system itself, not running as the PG user (an option supported\nby PG and at least pgbackrest) being compromised, thus allowing for\ninjection of changes into a backup or into a restore.\n\nThe beginning of this discussion also very clearly had individuals voicing\nstrong opinions that unauthenticated encryption methods were not acceptable\nas an end-state for PG due to the clear issue of there then being no\nprotection against modification of data. 
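To make that concrete, here is a toy encrypt-then-MAC sketch (the construction and names are illustrative only; the actual patches use real AEAD ciphers such as AES-GCM). Plain unauthenticated encryption would silently decrypt a tampered page to garbage, while the tag check rejects it:

```python
import hashlib
import hmac
import os

# Illustrative 'sealed page': encrypt, then MAC the ciphertext.
def _stream(key, nonce, length):
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def seal_page(key, nonce, page):
    ct = bytes(a ^ b for a, b in zip(page, _stream(key, nonce, len(page))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct + tag  # the 32-byte tag rides along with the encrypted page

def open_page(key, nonce, sealed):
    ct, tag = sealed[:-32], sealed[-32:]
    want = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(want, tag):
        raise ValueError('page fails authentication: modified at rest or in transit')
    return bytes(a ^ b for a, b in zip(ct, _stream(key, nonce, len(ct))))

key, nonce = os.urandom(32), os.urandom(16)
sealed = seal_page(key, nonce, b'tuple data')
# Flip one ciphertext bit, as an attacker with storage access could:
tampered = bytes([sealed[0] ^ 1]) + sealed[1:]
try:
    open_page(key, nonce, tampered)
except ValueError:
    pass  # the modification is detected instead of being decrypted
assert open_page(key, nonce, sealed) == b'tuple data'
```

The point is that detection needs only the key at verification time, not at the moment of the attack, which is exactly the offline and backup scenarios below.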
The approach we are working\ntowards provides both the unauthenticated option, which clearly has value\nto a large number of our collective user base considering the number of\ncommercial implementations which have now arisen, and the authenticated\nsolution which goes further and provides the level clearly expected of the\nPG community. This gets us a win-win situation.\n\n> There's clearly user demand for it as there's a number of organizations\n> > who have forks which are providing it in one shape or another. This\n> > kind of splintering of the community is actually an actively bad thing\n> > for the project and is part of what killed Unix, by at least some pretty\n> > reputable accounts, in my view.\n>\n> Yes, the number of commercial implementations of this is a concern. Of\n> course, it is also possible that those commercial implementations are\n> meeting checkbox requirements rather than technical ones, and the\n> community has been hostile to check box-only features.\n\n\nI’ve grown weary of this argument as the other major piece of work it was\nroutinely applied to was RLS and yet that has certainly been seen broadly\nas a beneficial feature with users clearly leveraging it and in more than\nsome “checkbox” way.\n\nIndeed, it’s similar also in that commercial implementations were done of\nRLS while there were arguments made about it being a checkbox feature which\nwere used to discourage it from being implemented in core. Were it truly\ncheckbox, I don’t feel we would have the regular and ongoing discussion\nabout it on the lists that we do, nor see other tools built on top of PG\nwhich specifically leverage it. Perhaps there are truly checkbox features\nout there which we will never implement, but I’m (perhaps due to what my\ndad would call selective listening on my part, perhaps not) having trouble\ncoming up with any presently. Features that exist in other systems that we\ndon’t want? Certainly. 
We don’t characterize those as simply “checkbox”\nthough. Perhaps that’s in part because we provide alternatives- but that’s\nnot the case here. We have no comparable way to have this capability as\npart of the core system.\n\nWe, as a community, are clearly losing value by lack of this capability, if\nby no other measure than simply the numerous users of the commercial\nimplementations feeling that they simply can’t use PG without this feature,\nfor whatever their reasoning.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 28 Mar 2023 00:01:56 +0200",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 12:01:56AM +0200, Stephen Frost wrote:\n> Greetings,\n> \n> On Mon, Mar 27, 2023 at 12:38 Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Mar 8, 2023 at 04:25:04PM -0500, Stephen Frost wrote:\n> > Agreed, though the latest efforts include an option for *authenticated*\n> > encryption as well as unauthenticated. That makes it much more\n> > difficult to make undetected changes to the data that's protected by\n> > the authenticated encryption being used.\n> \n> I thought some more about this. GCM-style authentication of encrypted\n> data has value because it assumes the two end points are secure but that\n> a malicious actor could modify data during transfer. In the Postgres\n> case, it seems the two end points and the transfer are all in the same\n> place. Therefore, it is unclear to me the value of using GCM-style\n> authentication because if the GCM-level can be modified, so can the end\n> points, and the encryption key exposed.\n> \n> \n> What are the two end points you are referring to and why don’t you feel there\n> is an opportunity between them for a malicious actor to attack the system?\n\nUh, TLS can use GCM and in this case you assume the sender and receiver\nare secure, no?\n\n> There are simpler cases to consider than an online attack on a single\n> independent system where an attacker having access to modify the data in\n> transit between PG and the storage would imply the attacker also having access\n> to read keys out of PG’s memory. 
\n\nI consider the operating system and its processes as much more of a\nsingle entity than TLS over a network.\n\n> As specific examples, consider:\n> \n> An attack against the database system where the database server is shut down,\n> or a backup, and the encryption key isn’t available on the system.\n> \n> The backup system itself, not running as the PG user (an option supported by PG\n> and at least pgbackrest) being compromised, thus allowing for injection of\n> changes into a backup or into a restore.\n\nI then question why we are not adding encryption to pg_basebackup or\npgbackrest rather than the database system.\n\n> The beginning of this discussion also very clearly had individuals voicing\n> strong opinions that unauthenticated encryption methods were not acceptable as\n> an end-state for PG due to the clear issue of there then being no protection\n> against modification of data. The approach we are working towards provides\n\nWhat were the _technical_ reasons for those objections?\n\n> both the unauthenticated option, which clearly has value to a large number of\n> our collective user base considering the number of commercial implementations\n> which have now arisen, and the authenticated solution which goes further and\n> provides the level clearly expected of the PG community. This gets us a win-win\n> situation.\n> \n> > There's clearly user demand for it as there's a number of organizations\n> > who have forks which are providing it in one shape or another. This\n> > kind of splintering of the community is actually an actively bad thing\n> > for the project and is part of what killed Unix, by at least some pretty\n> > reputable accounts, in my view.\n> \n> Yes, the number of commercial implementations of this is a concern. 
Of\n> course, it is also possible that those commercial implementations are\n> meeting checkbox requirements rather than technical ones, and the\n> community has been hostile to check box-only features.\n> \n> \n> I’ve grown weary of this argument as the other major piece of work it was\n> routinely applied to was RLS and yet that has certainly been seen broadly as a\n> beneficial feature with users clearly leveraging it and in more than some\n> “checkbox” way.\n\nRLS had to overcome that objection, and I think it did, and it was better\nfor doing that.\n\n> We, as a community, are clearly losing value by lack of this capability, if by\n> no other measure than simply the numerous users of the commercial\n> implementations feeling that they simply can’t use PG without this feature, for\n> whatever their reasoning.\n\nThat is true, but I go back to my concern over useful feature vs. check\nbox.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Mon, 27 Mar 2023 18:16:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Mar 27, 2023 at 18:17 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Mar 28, 2023 at 12:01:56AM +0200, Stephen Frost wrote:\n> > Greetings,\n> >\n> > On Mon, Mar 27, 2023 at 12:38 Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, Mar 8, 2023 at 04:25:04PM -0500, Stephen Frost wrote:\n> > > Agreed, though the latest efforts include an option for\n> *authenticated*\n> > > encryption as well as unauthenticated. That makes it much more\n> > > difficult to make undetected changes to the data that's protected\n> by\n> > > the authenticated encryption being used.\n> >\n> > I thought some more about this. GCM-style authentication of\n> encrypted\n> > data has value because it assumes the two end points are secure but\n> that\n> > a malicious actor could modify data during transfer. In the Postgres\n> > case, it seems the two end points and the transfer are all in the\n> same\n> > place. Therefore, it is unclear to me the value of using GCM-style\n> > authentication because if the GCM-level can be modified, so can the\n> end\n> > points, and the encryption key exposed.\n> >\n> >\n> > What are the two end points you are referring to and why don’t you feel\n> there\n> > is an opportunity between them for a malicious actor to attack the\n> system?\n>\n> Uh, TLS can use GCM and in this case you assume the sender and receiver\n> are secure, no?\n\n\nTLS does use GCM.. pretty much exclusively as far as I can recall. 
So do a\nlot of other things though..\n\n> There are simpler cases to consider than an online attack on a single\n> > independent system where an attacker having access to modify the data in\n> > transit between PG and the storage would imply the attacker also having\n> access\n> > to read keys out of PG’s memory.\n>\n> I consider the operating system and its processes as much more of a\n> single entity than TLS over a network.\n\n\nThis may be the case sometimes but there’s absolutely no shortage of other\ncases and it’s almost more the rule these days, that there is some kind of\nnetwork between the OS processes and the storage- a SAN, an iSCSI network,\nNFS, are all quite common.\n\n> As specific examples, consider:\n> >\n> > An attack against the database system where the database server is shut\n> down,\n> > or a backup, and the encryption key isn’t available on the system.\n> >\n> > The backup system itself, not running as the PG user (an option\n> supported by PG\n> > and at least pgbackrest) being compromised, thus allowing for injection\n> of\n> > changes into a backup or into a restore.\n>\n> I then question why we are not adding encryption to pg_basebackup or\n> pgbackrest rather than the database system.\n\n\nPgbackrest has encryption and authentication of it … but that doesn’t\nactually address the attack vector that I outlined. If the backup user is\ncompromised then they can change the data before it gets to the storage.\nIf the backup user is compromised then they have access to whatever key is\nused to encrypt and authenticate the backup and therefore can trivially\nmanipulate the data.\n\nEncryption of backups by the backup tool serves to protect the data after\nit leaves the backup system and is stored in cloud storage or in whatever\nformat the repository takes. 
This is beneficial, particularly when the\ndata itself offers no protection, but simply not the same.\n\n> The beginning of this discussion also very clearly had individuals voicing\n> > strong opinions that unauthenticated encryption methods were not\n> acceptable as\n> > an end-state for PG due to the clear issue of there then being no\n> protection\n> > against modification of data. The approach we are working towards\n> provides\n>\n> What were the _technical_ reasons for those objections?\n\n\nI believe largely the ones I’m bringing up here and which I outline above…\nI don’t mean to pretend that any of this is of my own independent\nconstruction. I don’t believe it is and my apologies if it came across that\nway.\n\n> both the unauthenticated option, which clearly has value to a large\n> number of\n> > our collective user base considering the number of commercial\n> implementations\n> > which have now arisen, and the authenticated solution which goes further\n> and\n> > provides the level clearly expected of the PG community. 
This gets us a\n> win-win\n> > situation.\n> >\n> > > There's clearly user demand for it as there's a number of\n> organizations\n> > > who have forks which are providing it in one shape or another.\n> This\n> > > kind of splintering of the community is actually an actively bad\n> thing\n> > > for the project and is part of what killed Unix, by at least some\n> pretty\n> > > reputable accounts, in my view.\n> >\n> > Yes, the number of commercial implementations of this is a concern.\n> Of\n> > course, it is also possible that those commercial implementations are\n> > meeting checkbox requirements rather than technical ones, and the\n> > community has been hostile to check box-only features.\n> >\n> >\n> > I’ve grown weary of this argument as the other major piece of work it was\n> > routinely applied to was RLS and yet that has certainly been seen\n> broadly as a\n> > beneficial feature with users clearly leveraging it and in more than some\n> > “checkbox” way.\n>\n> RLS has to overcome that objection, and I think it did, as was better\n> for doing that.\n\n\nBeyond it being called a checkbox - what were the arguments against it? I\ndon’t object to being challenged to point out the use cases, but I feel\nthat at least some very clear and straight forward ones are outlined from\nwhat has been said above. I also don’t believe those are the only ones but\nI don’t think I could enumerate every use case for RLS either, even after\nseeing it used for quite a few years. 
I do seriously question the level of\neffort expected of features that are claimed to be “Checkbox” and tossed\nalmost exclusively for that reason on this list given the success of the\nones that have been accepted and are in active use by our users today.\n\n> We, as a community, are clearly losing value by lack of this capability,\n> if by\n> no other measure than simply the numerous users of the commercial\n> > implementations feeling that they simply can’t use PG without this\n> feature, for\n> > whatever their reasoning.\n>\n> That is true, but I go back to my concern over useful feature vs. check\n> box.\n\n\nWhile it’s easy to label something as checkbox, I don’t feel we have been\nfair to our users in doing so as it has historically prevented features\nwhich our users are demanding and end up getting from commercial providers\nuntil we implement them ultimately anyway. This particular argument simply\ndoesn’t seem to actually hold the value that proponents of it claim, for us\nat least, and we have clear counter-examples which we can point to and I\nhope we learn from those.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 28 Mar 2023 00:57:42 +0200",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 12:57:42AM +0200, Stephen Frost wrote:\n> I consider the operating system and its processes as much more of a\n> single entity than TLS over a network.\n> \n> This may be the case sometimes but there’s absolutely no shortage of other\n> cases and it’s almost more the rule these days, that there is some kind of\n> network between the OS processes and the storage- a SAN, an iSCSI network, NFS,\n> are all quite common.\n\nYes, but consider that the database cluster is having to get its data\nfrom that remote storage --- the remote storage is not an independent\nentity that can be corrupted without the database server being\ncompromised. If everything in PGDATA were GCM-verified, it would be\nsecure, but because some parts are not, I don't think it would be.\n\n> > As specific examples, consider:\n> >\n> > An attack against the database system where the database server is shut\n> down,\n> > or a backup, and the encryption key isn’t available on the system.\n> >\n> > The backup system itself, not running as the PG user (an option supported\n> by PG\n> > and at least pgbackrest) being compromised, thus allowing for injection\n> of\n> > changes into a backup or into a restore.\n> \n> I then question why we are not adding encryption to pg_basebackup or\n> pgbackrest rather than the database system.\n> \n> Pgbackrest has encryption and authentication of it … but that doesn’t actually\n> address the attack vector that I outlined. If the backup user is compromised\n> then they can change the data before it gets to the storage. 
If the backup\n> user is compromised then they have access to whatever key is used to encrypt\n> and authenticate the backup and therefore can trivially manipulate the data.\n\nSo the idea is that the backup user can be compromised without the data\nbeing vulnerable --- makes sense, though that use-case seems narrow.\n\n> What were the _technical_ reasons for those objections?\n> \n> I believe largely the ones I’m bringing up here and which I outline above… I\n> don’t mean to pretend that any of this is of my own independent construction. I\n> don’t believe it is and my apologies if it came across that way.\n\nYes, there is value beyond the check-box, but in most cases those\nvalues are limited considering the complexity of the features, and the\ncheck-box is what most people are asking for, I think.\n\n> > I’ve grown weary of this argument as the other major piece of work it was\n> > routinely applied to was RLS and yet that has certainly been seen broadly\n> as a\n> > beneficial feature with users clearly leveraging it and in more than some\n> > “checkbox” way.\n> \n> RLS has to overcome that objection, and I think it did, as was better\n> for doing that.\n> \n> Beyond it being called a checkbox - what were the arguments against it? I\n\nThe RLS arguments were that queries could expose some of the underlying\ndata, but in summary, that was considered acceptable.\n\n> > We, as a community, are clearly losing value by lack of this capability,\n> if by\n> > no other measure than simply the numerous users of the commercial\n> > implementations feeling that they simply can’t use PG without this\n> feature, for\n> > whatever their reasoning.\n> \n> That is true, but I go back to my concern over useful feature vs. check\n> box.\n> \n> While it’s easy to label something as checkbox, I don’t feel we have been fair\n\nNo, actually, it isn't. 
I am not sure why you are saying that.\n\n> to our users in doing so as it has historically prevented features which our\n> users are demanding and end up getting from commercial providers until we\n> implement them ultimately anyway. This particular argument simply doesn’t seem\n> to actually hold the value that proponents of it claim, for us at least, and we\n> have clear counter-examples which we can point to and I hope we learn from\n> those.\n\nI don't think you are addressing actual issues above.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Mon, 27 Mar 2023 19:19:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Mar 27, 2023 at 19:19 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Mar 28, 2023 at 12:57:42AM +0200, Stephen Frost wrote:\n> > I consider the operating system and its processes as much more of a\n> > single entity than TLS over a network.\n> >\n> > This may be the case sometimes but there’s absolutely no shortage of\n> other\n> > cases and it’s almost more the rule these days, that there is some kind\n> of\n> > network between the OS processes and the storage- a SAN, an iSCSI\n> network, NFS,\n> > are all quite common.\n>\n> Yes, but consider that the database cluster is having to get its data\n> from that remote storage --- the remote storage is not an independent\n> entity that can be corrupted without the databaes server being\n> compromised. If everything in PGDATA was GCM-verified, it would be\n> secure, but because some parts are not, I don't think it would be.\n\n\nThe remote storage is certainly an independent system. Multi-mount LUNs are\nentirely possible in a SAN (and absolutely with NFS, or just the NFS server\nitself is compromised..), so while the attacker may not have any access to\nthe database server itself, they may have access to these other systems,\nand that’s not even considering in-transit attacks which are also\nabsolutely possible, especially with iSCSI or NFS.\n\nI don’t understand what is being claimed that the remote storage is “not an\nindependent system” based on my understanding of, eg, NFS. With NFS, a\ndirectory on the NFS server is exported and the client mounts that\ndirectory as NFS locally, all over a network which may or may not be\nsecured against manipulation. A user on the NFS server with root access is\nabsolutely able to access and modify files on the NFS server trivially,\neven if they have no access to the PG server. 
Would you explain what you\nmean?\n\nI do agree that the ideal case would be that everything we can\n(not everything can be for various reasons, but we don’t actually need to\neither) in the PGDATA directory is encrypted and authenticated, just like\nit would be ideal if everything was checksum’d and isn’t today. We are\nprogressing in that direction thanks to efforts such as reworking the other\nsubsystems to use shared buffers and a consistent page format, but just\nlike with checksums we do not need to have the perfect solution for us to\nprovide a lot of value here- and our users know that as the same is true of\nthe unauthenticated encryption approaches being offered by the commercial\nsolutions.\n\n> > As specific examples, consider:\n> > >\n> > > An attack against the database system where the database server is\n> shut\n> > down,\n> > > or a backup, and the encryption key isn’t available on the system.\n> > >\n> > > The backup system itself, not running as the PG user (an option\n> supported\n> > by PG\n> > > and at least pgbackrest) being compromised, thus allowing for\n> injection\n> > of\n> > > changes into a backup or into a restore.\n> >\n> > I then question why we are not adding encryption to pg_basebackup or\n> > pgbackrest rather than the database system.\n> >\n> > Pgbackrest has encryption and authentication of it … but that doesn’t\n> actually\n> > address the attack vector that I outlined. If the backup user is\n> compromised\n> > then they can change the data before it gets to the storage. 
If the\n> backup\n> > user is compromised then they have access to whatever key is used to\n> encrypt\n> > and authenticate the backup and therefore can trivially manipulate the\n> data.\n>\n> So the idea is that the backup user can be compromised without the data\n> being vulnerable --- makes sense, though that use-case seems narrow.\n\n\nThat’s perhaps a fair consideration- but it’s clearly of enough value that\nmany of our users are asking for it and not using PG because we don’t have\nit today. Ultimately though, this clearly makes it more than a “checkbox”\nfeature. I hope we are able to agree on that now.\n\n> What were the _technical_ reasons for those objections?\n> >\n> > I believe largely the ones I’m bringing up here and which I outline\n> above… I\n> > don’t mean to pretend that any of this is of my own independent\n> construction. I\n> > don’t believe it is and my apologies if it came across that way.\n>\n> Yes, there is value beyond the check-box, but in most cases those\n> values are limited considering the complexity of the features, and the\n> check-box is what most people are asking for, I think.\n\n\nFor the users who ask on the lists for this feature, regularly, how many\ndon’t ask because they google or find prior responses on the list to the\nquestion of if we have this capability? How do we know that their cases\nare “checkbox”? Consider that there are standards groups which explicitly\nconsider these attack vectors and consider them important enough to require\nmitigations to address those vectors. Do the end users of PG understand the\nattack vectors or why they matter? 
Perhaps not, but just because they\ncan’t articulate the reasoning does NOT mean that the attack vector doesn’t\nexist or that their environment is somehow immune to it- indeed, as the\nstandards bodies surely know, the opposite is true- they’re almost\ncertainly at risk of those attack vectors and therefore the standards\nbodies are absolutely justified in requiring them to provide a solution.\nTreating these users as unimportant because they don’t have the depth of\nunderstanding that we do or that the standards body does is not helping\nthem- it’s actively driving them away from PG.\n\n> > I’ve grown weary of this argument as the other major piece of work\n> it was\n> > > routinely applied to was RLS and yet that has certainly been seen\n> broadly\n> > as a\n> > > beneficial feature with users clearly leveraging it and in more\n> than some\n> > > “checkbox” way.\n> >\n> > RLS has to overcome that objection, and I think it did, as was better\n> > for doing that.\n> >\n> > Beyond it being called a checkbox - what were the arguments against it?\n> I\n>\n> The RLS arguments were that queries could expoose some of the underlying\n> data, but in summary, that was considered acceptable.\n\n\nThis is an excellent point- and dovetails very nicely into my argument that\nprotecting primary data (what is provided by users and ends up in indexes\nand heaps) is valuable even if we don’t (yet..) have protection for other\nparts of the system. 
Reducing the size of the attack vector is absolutely\nuseful, especially when it’s such a large amount of the data in the system.\nYes, we should, and will, continue to improve- as we do with many features,\nbut we don’t need to wait for perfection to include this feature, just as\nwith RLS and numerous other features we have.\n\n> > We, as a community, are clearly losing value by lack of this\n> capability,\n> > if by\n> > > no other measure than simply the numerous users of the commercial\n> > > implementations feeling that they simply can’t use PG without this\n> > feature, for\n> > > whatever their reasoning.\n> >\n> > That is true, but I go back to my concern over useful feature vs.\n> check\n> > box.\n> >\n> > While it’s easy to label something as checkbox, I don’t feel we have\n> been fair\n>\n> No, actually, it isn't. I am not sure why you are saying that.\n\n\nI’m confused as to what is required to label a feature as a “checkbox”\nfeature then. What did you us to make that determination of this feature?\nI’m happy to be wrong here.\n\n> to our users in doing so as it has historically prevented features which\n> our\n> > users are demanding and end up getting from commercial providers until we\n> > implement them ultimately anyway. This particular argument simply\n> doesn’t seem\n> > to actually hold the value that proponents of it claim, for us at least,\n> and we\n> > have clear counter-examples which we can point to and I hope we learn\n> from\n> > those.\n>\n> I don't think you are addressing actual issues above.\n\n\nSpecifics would be really helpful. 
I don’t doubt that there are things I’m\nmissing, but I’ve tried to address each point raised clearly and concisely.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 28 Mar 2023 02:03:50 +0200",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 02:03:50AM +0200, Stephen Frost wrote:\n> The remote storage is certainly an independent system. Multi-mount LUNs are\n> entirely possible in a SAN (and absolutely with NFS, or just the NFS server\n> itself is compromised..), so while the attacker may not have any access to the\n> database server itself, they may have access to these other systems, and that’s\n> not even considering in-transit attacks which are also absolutely possible,\n> especially with iSCSI or NFS. \n> \n> I don’t understand what is being claimed that the remote storage is “not an\n> independent system” based on my understanding of, eg, NFS. With NFS, a\n> directory on the NFS server is exported and the client mounts that directory as\n> NFS locally, all over a network which may or may not be secured against\n> manipulation. A user on the NFS server with root access is absolutely able to\n> access and modify files on the NFS server trivially, even if they have no\n> access to the PG server. Would you explain what you mean?\n\nThe point is that someone could change values in the storage, pg_xact,\nencryption settings, binaries, that would allow the attacker to learn\nthe encryption key. This is not possible for two secure endpoints and\nsomeone changing data in transit. Yeah, it took me a while to\nunderstand these boundaries too.\n\n> So the idea is that the backup user can be compromised without the data\n> being vulnerable --- makes sense, though that use-case seems narrow.\n> \n> That’s perhaps a fair consideration- but it’s clearly of enough value that many\n> of our users are asking for it and not using PG because we don’t have it today.\n> Ultimately though, this clearly makes it more than a “checkbox” feature. 
I hope\n> we are able to agree on that now.\n\nIt is more than a check box feature, yes, but I am guessing few people\nare wanting this for the actual features beyond check box.\n\n> Yes, there is value beyond the check-box, but in most cases those\n> values are limited considering the complexity of the features, and the\n> check-box is what most people are asking for, I think.\n> \n> For the users who ask on the lists for this feature, regularly, how many don’t\n> ask because they google or find prior responses on the list to the question of\n> if we have this capability? How do we know that their cases are “checkbox”? \n\nBecause I have rarely heard people articulate the value beyond check\nbox.\n\n> Consider that there are standards groups which explicitly consider these attack\n> vectors and consider them important enough to require mitigations to address\n> those vectors. Do the end users of PG understand the attack vectors or why they\n> matter? Perhaps not, but just because they can’t articulate the reasoning does\n> NOT mean that the attack vector doesn’t exist or that their environment is\n> somehow immune to it- indeed, as the standards bodies surely know, the opposite\n> is true- they’re almost certainly at risk of those attack vectors and therefore\n> the standards bodies are absolutely justified in requiring them to provide a\n> solution. Treating these users as unimportant because they don’t have the depth\n> of understanding that we do or that the standards body does is not helping\n> them- it’s actively driving them away from PG. 
\n\nWell, then who is going to explain them here, because I have not heard\nthem yet.\n\n> The RLS arguments were that queries could expoose some of the underlying\n> data, but in summary, that was considered acceptable.\n> \n> This is an excellent point- and dovetails very nicely into my argument that\n> protecting primary data (what is provided by users and ends up in indexes and\n> heaps) is valuable even if we don’t (yet..) have protection for other parts of\n> the system. Reducing the size of the attack vector is absolutely useful,\n> especially when it’s such a large amount of the data in the system. Yes, we\n> should, and will, continue to improve- as we do with many features, but we\n> don’t need to wait for perfection to include this feature, just as with RLS and\n> numerous other features we have. \n\nThe issue is that you needed a certain type of user with a certain type\nof access to break RLS, while for this, writing to PGDATA is the simple\ncase for all the breakage, and the thing we are protecting with\nauthentication.\n\n> > > We, as a community, are clearly losing value by lack of this\n> capability,\n> > if by\n> > > no other measure than simply the numerous users of the commercial\n> > > implementations feeling that they simply can’t use PG without this\n> > feature, for\n> > > whatever their reasoning.\n> >\n> > That is true, but I go back to my concern over useful feature vs.\n> check\n> > box.\n> >\n> > While it’s easy to label something as checkbox, I don’t feel we have been\n> fair\n> \n> No, actually, it isn't. I am not sure why you are saying that.\n> \n> I’m confused as to what is required to label a feature as a “checkbox” feature\n> then. What did you us to make that determination of this feature? I’m happy to\n> be wrong here. \n\nI don't see the point in me continuing to reply here. 
You just seem to\ncontinue asking questions without actually thinking of what I am saying,\nand hope I get tired or something.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Mon, 27 Mar 2023 21:35:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Mar 27, 2023 at 21:35 Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Mar 28, 2023 at 02:03:50AM +0200, Stephen Frost wrote:\n> > The remote storage is certainly an independent system. Multi-mount LUNs\n> are\n> > entirely possible in a SAN (and absolutely with NFS, or just the NFS\n> server\n> > itself is compromised..), so while the attacker may not have any access\n> to the\n> > database server itself, they may have access to these other systems, and\n> that’s\n> > not even considering in-transit attacks which are also absolutely\n> possible,\n> > especially with iSCSI or NFS.\n> >\n> > I don’t understand what is being claimed that the remote storage is “not\n> an\n> > independent system” based on my understanding of, eg, NFS. With NFS, a\n> > directory on the NFS server is exported and the client mounts that\n> directory as\n> > NFS locally, all over a network which may or may not be secured against\n> > manipulation. A user on the NFS server with root access is absolutely\n> able to\n> > access and modify files on the NFS server trivially, even if they have no\n> > access to the PG server. Would you explain what you mean?\n>\n> The point is that someone could change values in the storage, pg_xact,\n> encryption settings, binaries, that would allow the attacker to learn\n> the encryption key. This is not possible for two secure endpoints and\n> someone changing data in transit. Yeah, it took me a while to\n> understand these boundaries too.\n\n\nThis depends on the specific configuration of the systems, clearly. Being\nable to change values in other parts of the system isn’t great and we\nshould work to improve on that, but clearly that isn’t so much of an issue\nthat people aren’t willing to accept a partial solution or existing\ncommercial solutions wouldn’t be accepted or considered viable. 
Indeed,\nusing GCM is objectively an improvement over what’s being offered commonly\ntoday.\n\nI also generally object to the idea that being able to manipulate the\nPGDATA directory necessarily means being able to gain access to the KEK. In\ntrivial solutions, sure, it’s possible, but the NFS server should never be\nasking some external KMS for the key to a given DB server and a reasonable\nimplementation won’t allow this, and instead would flag and log such an\nattempt for someone to review, leading to a much faster realization of a\ncompromised system.\n\nCertainly it’s much simpler to reason about an attacker with no knowledge\nof either system and only network access to see if they can penetrate the\ncommunications between the two end-points, but that is not the only case\nwhere authenticated encryption is useful.\n\n> So the idea is that the backup user can be compromised without the\n> data\n> > being vulnerable --- makes sense, though that use-case seems narrow.\n> >\n> > That’s perhaps a fair consideration- but it’s clearly of enough value\n> that many\n> > of our users are asking for it and not using PG because we don’t have it\n> today.\n> > Ultimately though, this clearly makes it more than a “checkbox” feature.\n> I hope\n> > we are able to agree on that now.\n>\n> It is more than a check box feature, yes, but I am guessing few people\n> are wanting the this for the actual features beyond check box.\n\n\nAs I explained previously, perhaps the people asking are doing so for only\nthe “checkbox”, but that doesn’t mean it isn’t a useful feature or that it\nisn’t valuable in its own right. Those checklists were compiled and\nenforced for a reason, which the end users might not understand but is\nstill absolutely valuable. 
Sad to say, but frankly this is becoming more\nand more common but we shouldn’t be faulting the users asking for it- if it\nwere truly useless then eventually it would be removed from the standard,\nbut it hasn’t and it won’t be because, while not every end user has a depth\nof understanding to explain it, it is actually a useful and important\ncapability to have and one that is important to implement.\n\n> Yes, there is value beyond the check-box, but in most cases those\n> > values are limited considering the complexity of the features, and\n> the\n> > check-box is what most people are asking for, I think.\n> >\n> > For the users who ask on the lists for this feature, regularly, how many\n> don’t\n> > ask because they google or find prior responses on the list to the\n> question of\n> > if we have this capability? How do we know that their cases are\n> “checkbox”?\n>\n> Because I have rarely heard people articulate the value beyond check\n> box.\n\n\nHave I done so sufficiently then that we can agree that calling it\n“checkbox” is inappropriate and detrimental to our user base?\n\n> Consider that there are standards groups which explicitly consider these\n> attack\n> > vectors and consider them important enough to require mitigations to\n> address\n> > those vectors. Do the end users of PG understand the attack vectors or\n> why they\n> > matter? Perhaps not, but just because they can’t articulate the\n> reasoning does\n> > NOT mean that the attack vector doesn’t exist or that their environment\n> is\n> > somehow immune to it- indeed, as the standards bodies surely know, the\n> opposite\n> > is true- they’re almost certainly at risk of those attack vectors and\n> therefore\n> > the standards bodies are absolutely justified in requiring them to\n> provide a\n> > solution. 
Treating these users as unimportant because they don’t have\n> the depth\n> > of understanding that we do or that the standards body does is not\n> helping\n> > them- it’s actively driving them away from PG.\n>\n> Well, then who is going to explain them here, because I have not heard\n> them yet.\n\n\nI thought I was doing so.\n\n> The RLS arguments were that queries could expoose some of the\n> underlying\n> > data, but in summary, that was considered acceptable.\n> >\n> > This is an excellent point- and dovetails very nicely into my argument\n> that\n> > protecting primary data (what is provided by users and ends up in\n> indexes and\n> > heaps) is valuable even if we don’t (yet..) have protection for other\n> parts of\n> > the system. Reducing the size of the attack vector is absolutely useful,\n> > especially when it’s such a large amount of the data in the system. Yes,\n> we\n> > should, and will, continue to improve- as we do with many features, but\n> we\n> > don’t need to wait for perfection to include this feature, just as with\n> RLS and\n> > numerous other features we have.\n>\n> The issue is that you needed a certain type of user with a certain type\n> of access to break RLS, while for this, writing to PGDATA is the simple\n> case for all the breakage, and the thing we are protecting with\n> authentication.\n\n\nThis goes back to the “if it isn’t perfect then it’s useless” argument …\nbut that’s exactly the discussion which was had around RLS and ultimately\nwe decided that RLS was still useful even with the leaks- and our users\naccepted that also and have benefitted from it ever since it was included\nin core. The same exists here- yes, more needs to be done than the absolute\nsimplest “make install” to have the system be secure (not unlike today with\nour defaults from a source build with “make install”..) 
but at least with\nthis capability included it’s possible, and we can write “securing\nPostgreSQL” documentation on how to, whereas without it there is simply no\nway to address the attack vectors I’ve articulated here.\n\n> > > We, as a community, are clearly losing value by lack of this\n> > capability,\n> > > if by\n> > > > no other measure than simply the numerous users of the\n> commercial\n> > > > implementations feeling that they simply can’t use PG\n> without this\n> > > feature, for\n> > > > whatever their reasoning.\n> > >\n> > > That is true, but I go back to my concern over useful feature\n> vs.\n> > check\n> > > box.\n> > >\n> > > While it’s easy to label something as checkbox, I don’t feel we\n> have been\n> > fair\n> >\n> > No, actually, it isn't. I am not sure why you are saying that.\n> >\n> > I’m confused as to what is required to label a feature as a “checkbox”\n> feature\n> > then. What did you us to make that determination of this feature? I’m\n> happy to\n> > be wrong here.\n>\n> I don't see the point in me continuing to reply here. You just seem to\n> continue asking questions without actually thinking of what I am saying,\n> and hope I get tired or something.\n\n\nI hope we have others who have a moment to chime in here and provide their\nviewpoints as I don’t feel this is an accurate representation of the\ndiscussion thus far.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Mon, Mar 27, 2023 at 21:35 Bruce Momjian <bruce@momjian.us> wrote:On Tue, Mar 28, 2023 at 02:03:50AM +0200, Stephen Frost wrote:\n> The remote storage is certainly an independent system. Multi-mount LUNs are\n> entirely possible in a SAN (and absolutely with NFS, or just the NFS server\n> itself is compromised..), so while the attacker may not have any access to the\n> database server itself, they may have access to these other systems, and that’s\n> not even considering in-transit attacks which are also absolutely possible,\n> especially with iSCSI or NFS. 
\n> \n> I don’t understand what is being claimed that the remote storage is “not an\n> independent system” based on my understanding of, eg, NFS. With NFS, a\n> directory on the NFS server is exported and the client mounts that directory as\n> NFS locally, all over a network which may or may not be secured against\n> manipulation. A user on the NFS server with root access is absolutely able to\n> access and modify files on the NFS server trivially, even if they have no\n> access to the PG server. Would you explain what you mean?\n\nThe point is that someone could change values in the storage, pg_xact,\nencryption settings, binaries, that would allow the attacker to learn\nthe encryption key. This is not possible for two secure endpoints and\nsomeone changing data in transit. Yeah, it took me a while to\nunderstand these boundaries too.This depends on the specific configuration of the systems, clearly. Being able to change values in other parts of the system isn’t great and we should work to improve on that, but clearly that isn’t so much of an issue that people aren’t willing to accept a partial solution or existing commercial solutions wouldn’t be accepted or considered viable. Indeed, using GCM is objectively an improvement over what’s being offered commonly today.I also generally object to the idea that being able to manipulate the PGDATA directory necessarily means being able to gain access to the KEK. 
In trivial solutions, sure, it’s possible, but the NFS server should never be asking some external KMS for the key to a given DB server and a reasonable implementation won’t allow this, and instead would flag and log such an attempt for someone to review, leading to a much faster realization of a compromised system.Certainly it’s much simpler to reason about an attacker with no knowledge of either system and only network access to see if they can penetrate the communications between the two end-points, but that is not the only case where authenticated encryption is useful.\n> So the idea is that the backup user can be compromised without the data\n> being vulnerable --- makes sense, though that use-case seems narrow.\n> \n> That’s perhaps a fair consideration- but it’s clearly of enough value that many\n> of our users are asking for it and not using PG because we don’t have it today.\n> Ultimately though, this clearly makes it more than a “checkbox” feature. I hope\n> we are able to agree on that now.\n\nIt is more than a check box feature, yes, but I am guessing few people\nare wanting this for the actual features beyond check box.As I explained previously, perhaps the people asking are doing so for only the “checkbox”, but that doesn’t mean it isn’t a useful feature or that it isn’t valuable in its own right. Those checklists were compiled and enforced for a reason, which the end users might not understand but is still absolutely valuable. 
Sad to say, but frankly this is becoming more and more common but we shouldn’t be faulting the users asking for it- if it were truly useless then eventually it would be removed from the standard, but it hasn’t and it won’t be because, while not every end user has a depth of understanding to explain it, it is actually a useful and important capability to have and one that is important to implement.\n> Yes, there is value beyond the check-box, but in most cases those\n> values are limited considering the complexity of the features, and the\n> check-box is what most people are asking for, I think.\n> \n> For the users who ask on the lists for this feature, regularly, how many don’t\n> ask because they google or find prior responses on the list to the question of\n> if we have this capability? How do we know that their cases are “checkbox”? \n\nBecause I have rarely heard people articulate the value beyond check\nbox.Have I done so sufficiently then that we can agree that calling it “checkbox” is inappropriate and detrimental to our user base?\n> Consider that there are standards groups which explicitly consider these attack\n> vectors and consider them important enough to require mitigations to address\n> those vectors. Do the end users of PG understand the attack vectors or why they\n> matter? Perhaps not, but just because they can’t articulate the reasoning does\n> NOT mean that the attack vector doesn’t exist or that their environment is\n> somehow immune to it- indeed, as the standards bodies surely know, the opposite\n> is true- they’re almost certainly at risk of those attack vectors and therefore\n> the standards bodies are absolutely justified in requiring them to provide a\n> solution. Treating these users as unimportant because they don’t have the depth\n> of understanding that we do or that the standards body does is not helping\n> them- it’s actively driving them away from PG. 
\n\nWell, then who is going to explain them here, because I have not heard\nthem yet.I thought I was doing so.\n> The RLS arguments were that queries could expose some of the underlying\n> data, but in summary, that was considered acceptable.\n> \n> This is an excellent point- and dovetails very nicely into my argument that\n> protecting primary data (what is provided by users and ends up in indexes and\n> heaps) is valuable even if we don’t (yet..) have protection for other parts of\n> the system. Reducing the size of the attack vector is absolutely useful,\n> especially when it’s such a large amount of the data in the system. Yes, we\n> should, and will, continue to improve- as we do with many features, but we\n> don’t need to wait for perfection to include this feature, just as with RLS and\n> numerous other features we have. \n\nThe issue is that you needed a certain type of user with a certain type\nof access to break RLS, while for this, writing to PGDATA is the simple\ncase for all the breakage, and the thing we are protecting with\nauthentication.This goes back to the “if it isn’t perfect then it’s useless” argument … but that’s exactly the discussion which was had around RLS and ultimately we decided that RLS was still useful even with the leaks- and our users accepted that also and have benefitted from it ever since it was included in core. The same exists here- yes, more needs to be done than the absolute simplest “make install” to have the system be secure (not unlike today with our defaults from a source build with “make install”..) but at least with this capability included it’s possible, and we can write “securing PostgreSQL” documentation on how to, whereas without it there is simply no way to address the attack vectors I’ve articulated here. 
\n> > > We, as a community, are clearly losing value by lack of this\n> capability,\n> > if by\n> > > no other measure than simply the numerous users of the commercial\n> > > implementations feeling that they simply can’t use PG without this\n> > feature, for\n> > > whatever their reasoning.\n> >\n> > That is true, but I go back to my concern over useful feature vs.\n> check\n> > box.\n> >\n> > While it’s easy to label something as checkbox, I don’t feel we have been\n> fair\n> \n> No, actually, it isn't. I am not sure why you are saying that.\n> \n> I’m confused as to what is required to label a feature as a “checkbox” feature\n> then. What did you use to make that determination of this feature? I’m happy to\n> be wrong here. \n\nI don't see the point in me continuing to reply here. You just seem to\ncontinue asking questions without actually thinking of what I am saying,\nand hope I get tired or something.I hope we have others who have a moment to chime in here and provide their viewpoints as I don’t feel this is an accurate representation of the discussion thus far. Thanks,Stephen",
"msg_date": "Mon, 27 Mar 2023 22:56:58 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 5:02 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> > There's clearly user demand for it as there's a number of organizations\n>> > who have forks which are providing it in one shape or another. This\n>> > kind of splintering of the community is actually an actively bad thing\n>> > for the project and is part of what killed Unix, by at least some pretty\n>> > reputable accounts, in my view.\n>>\n>> Yes, the number of commercial implementations of this is a concern. Of\n>> course, it is also possible that those commercial implementations are\n>> meeting checkbox requirements rather than technical ones, and the\n>> community has been hostile to check box-only features.\n>\n>\n> I’ve grown weary of this argument as the other major piece of work it was\n> routinely applied to was RLS and yet that has certainly been seen broadly\n> as a beneficial feature with users clearly leveraging it and in more than\n> some “checkbox” way.\n>\n> Indeed, it’s similar also in that commercial implementations were done of\n> RLS while there were arguments made about it being a checkbox feature which\n> were used to discourage it from being implemented in core. Were it truly\n> checkbox, I don’t feel we would have the regular and ongoing discussion\n> about it on the lists that we do, nor see other tools built on top of PG\n> which specifically leverage it. Perhaps there are truly checkbox features\n> out there which we will never implement, but I’m (perhaps due to what my\n> dad would call selective listening on my part, perhaps not) having trouble\n> coming up with any presently. Features that exist in other systems that we\n> don’t want? Certainly. We don’t characterize those as simply “checkbox”\n> though. Perhaps that’s in part because we provide alternatives- but that’s\n> not the case here. 
We have no comparable way to have this capability as\n> part of the core system.\n>\n> We, as a community, are clearly losing value by lack of this capability,\n> if by no other measure than simply the numerous users of the commercial\n> implementations feeling that they simply can’t use PG without this feature,\n> for whatever their reasoning.\n>\n\nI also think this is something of a problem because very few requirements\nare actually purely technical requirements, and I think the issue is that\nin many cases there are ways around the lack of the feature.\n\nSo I would phrase this differently. What is the value of doing this in\ncore?\n\nThis dramatically simplifies the question of setting up a PostgreSQL\nenvironment that is properly protected with encryption at rest. That in\nitself is valuable. Today you can accomplish something similar with\nencrypted filesystems and encryption options in things like pgbackrest.\nHowever these are many different pieces of a solution and messing up the\nsetup of any one of them can compromise the data. Having a single point of\nencryption and decryption means fewer opportunities to mess it up and that\nmeans less risk. This in turn makes it easier to settle on using\nPostgreSQL.\n\nThere are certainly going to be those who approach encryption at rest as a\ncheckbox item and who don't really care if there are holes in it. But\nthere are others who really should be concerned (and this is becoming a\nbigger issue where data privacy, PCI-DSS, and other requirements may come\ninto play), and those need better tooling than we have. I also think that\nas data privacy becomes a larger issue, this will become a larger topic.\n\n Anyway, my contribution to that question.\n\nBest Wishes,\nChris Travers\n\n>\n> Thanks,\n>\n> Stephen\n>\n\n\n-- \nBest Wishes,\nChris Travers\n\nEfficito: Hosted Accounting and ERP. Robust and Flexible. 
No vendor\nlock-in.\nhttp://www.efficito.com/learn_more\n\nOn Tue, Mar 28, 2023 at 5:02 AM Stephen Frost <sfrost@snowman.net> wrote:\n> There's clearly user demand for it as there's a number of organizations\n> who have forks which are providing it in one shape or another. This\n> kind of splintering of the community is actually an actively bad thing\n> for the project and is part of what killed Unix, by at least some pretty\n> reputable accounts, in my view.\n\nYes, the number of commercial implementations of this is a concern. Of\ncourse, it is also possible that those commercial implementations are\nmeeting checkbox requirements rather than technical ones, and the\ncommunity has been hostile to check box-only features.I’ve grown weary of this argument as the other major piece of work it was routinely applied to was RLS and yet that has certainly been seen broadly as a beneficial feature with users clearly leveraging it and in more than some “checkbox” way.Indeed, it’s similar also in that commercial implementations were done of RLS while there were arguments made about it being a checkbox feature which were used to discourage it from being implemented in core. Were it truly checkbox, I don’t feel we would have the regular and ongoing discussion about it on the lists that we do, nor see other tools built on top of PG which specifically leverage it. Perhaps there are truly checkbox features out there which we will never implement, but I’m (perhaps due to what my dad would call selective listening on my part, perhaps not) having trouble coming up with any presently. Features that exist in other systems that we don’t want? Certainly. We don’t characterize those as simply “checkbox” though. Perhaps that’s in part because we provide alternatives- but that’s not the case here. 
We have no comparable way to have this capability as part of the core system.We, as a community, are clearly losing value by lack of this capability, if by no other measure than simply the numerous users of the commercial implementations feeling that they simply can’t use PG without this feature, for whatever their reasoning.I also think this is something of a problem because very few requirements are actually purely technical requirements, and I think the issue is that in many cases there are ways around the lack of the feature. So I would phrase this differently. What is the value of doing this in core?This dramatically simplifies the question of setting up a PostgreSQL environment that is properly protected with encryption at rest. That in itself is valuable. Today you can accomplish something similar with encrypted filesystems and encryption options in things like pgbackrest. However these are many different pieces of a solution and messing up the setup of any one of them can compromise the data. Having a single point of encryption and decryption means fewer opportunities to mess it up and that means less risk. This in turn makes it easier to settle on using PostgreSQL.There are certainly going to be those who approach encryption at rest as a checkbox item and who don't really care if there are holes in it. But there are others who really should be concerned (and this is becoming a bigger issue where data privacy, PCI-DSS, and other requirements may come into play), and those need better tooling than we have. I also think that as data privacy becomes a larger issue, this will become a larger topic. Anyway, my contribution to that question.Best Wishes,Chris TraversThanks,Stephen\n-- Best Wishes,Chris TraversEfficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.http://www.efficito.com/learn_more",
"msg_date": "Tue, 28 Mar 2023 14:28:40 +0700",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 8:35 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Mar 28, 2023 at 02:03:50AM +0200, Stephen Frost wrote:\n> > The remote storage is certainly an independent system. Multi-mount LUNs\n> are\n> > entirely possible in a SAN (and absolutely with NFS, or just the NFS\n> server\n> > itself is compromised..), so while the attacker may not have any access\n> to the\n> > database server itself, they may have access to these other systems, and\n> that’s\n> > not even considering in-transit attacks which are also absolutely\n> possible,\n> > especially with iSCSI or NFS.\n> >\n> > I don’t understand what is being claimed that the remote storage is “not\n> an\n> > independent system” based on my understanding of, eg, NFS. With NFS, a\n> > directory on the NFS server is exported and the client mounts that\n> directory as\n> > NFS locally, all over a network which may or may not be secured against\n> > manipulation. A user on the NFS server with root access is absolutely\n> able to\n> > access and modify files on the NFS server trivially, even if they have no\n> > access to the PG server. Would you explain what you mean?\n>\n> The point is that someone could change values in the storage, pg_xact,\n> encryption settings, binaries, that would allow the attacker to learn\n> the encryption key. This is not possible for two secure endpoints and\n> someone changing data in transit. 
Yeah, it took me a while to\n> understand these boundaries too.\n>\n> > So the idea is that the backup user can be compromised without the\n> data\n> > being vulnerable --- makes sense, though that use-case seems narrow.\n> >\n> > That’s perhaps a fair consideration- but it’s clearly of enough value\n> that many\n> > of our users are asking for it and not using PG because we don’t have it\n> today.\n> > Ultimately though, this clearly makes it more than a “checkbox” feature.\n> I hope\n> > we are able to agree on that now.\n>\n> It is more than a check box feature, yes, but I am guessing few people\n> are wanting this for the actual features beyond check box.\n>\n> > Yes, there is value beyond the check-box, but in most cases those\n> > values are limited considering the complexity of the features, and\n> the\n> > check-box is what most people are asking for, I think.\n> >\n> > For the users who ask on the lists for this feature, regularly, how many\n> don’t\n> > ask because they google or find prior responses on the list to the\n> question of\n> > if we have this capability? How do we know that their cases are\n> “checkbox”?\n>\n> Because I have rarely heard people articulate the value beyond check\n> box.\n>\n\nI think there is value. I am going to try to articulate a case for this\nhere.\n\nThe first is that if people just want a \"checkbox\" then they can implement\nPostgreSQL in ways that have encryption at rest today. This includes using\nLUKS and the encryption options in pgbackrest. That's good enough for a\ncheckbox. It isn't good enough for a real, secured instance however.\n\nThere are a few problems with trying to do this for a secured instance.\nThe first is that you have multiple links in the encryption chain, and the\nfailure of any one of them will lead to cleartext exposure of data files.\nThis is not a problem for those who just want to tick a checkbox. 
Also the\nfact that backups and main systems are separately encrypted there (if the\nbackups are encrypted at all) means that people have to choose between\ncomplicating a restore process and simply ditching encryption on the\nbackup, which makes the checkbox somewhat pointless.\n\nWhere I have usually seen this come up is in the question of \"how do you\nprevent the problem of someone pulling storage devices from your servers\nand taking them away to compromise your data?\" Physical security comes\ninto it but often times people want more than that as an answer. I saw\nquestions like that from external auditors when I was at Adjust.\n\nIf you want to actually address that problem, then the current tooling is\nquite cumbersome. Yes you can do it, but it is very hard to make sure it\nhas been fully secured and also very hard to monitor. TDE would make the\nsetup and verification of this much easier. And in particular it solves a\nnumber of other issues that I can see arising from LUKS and similar\napproaches since it doesn't rely on the kernel to be able to translate\nplain text to and from cypher text.\n\nI have actually worked with folks who have PII and need to protect it and\nwho currently use LUKS and pg_backrest to do so. I would be extremely\nhappy to see TDE replace those for their needs. I can imagine that those\nwho hold high value data would use it as well instead of these other more\nerror prone and less secure setups.\n\n\n>\n> > Consider that there are standards groups which explicitly consider these\n> attack\n> > vectors and consider them important enough to require mitigations to\n> address\n> > those vectors. Do the end users of PG understand the attack vectors or\n> why they\n> > matter? 
Perhaps not, but just because they can’t articulate the\n> reasoning does\n> NOT mean that the attack vector doesn’t exist or that their environment\n> is\n> somehow immune to it- indeed, as the standards bodies surely know, the\n> opposite\n> is true- they’re almost certainly at risk of those attack vectors and\n> therefore\n> the standards bodies are absolutely justified in requiring them to\n> provide a\n> solution. Treating these users as unimportant because they don’t have\n> the depth\n> of understanding that we do or that the standards body does is not\n> helping\n> them- it’s actively driving them away from PG.\n>\n> Well, then who is going to explain them here, because I have not heard\n> them yet.\n>\n> > The RLS arguments were that queries could expose some of the\n> underlying\n> > data, but in summary, that was considered acceptable.\n> >\n> > This is an excellent point- and dovetails very nicely into my argument\n> that\n> > protecting primary data (what is provided by users and ends up in\n> indexes and\n> > heaps) is valuable even if we don’t (yet..) have protection for other\n> parts of\n> > the system. Reducing the size of the attack vector is absolutely useful,\n> > especially when it’s such a large amount of the data in the system. 
Yes,\n> we\n> > should, and will, continue to improve- as we do with many features, but\n> we\n> > don’t need to wait for perfection to include this feature, just as with\n> RLS and\n> > numerous other features we have.\n>\n> The issue is that you needed a certain type of user with a certain type\n> of access to break RLS, while for this, writing to PGDATA is the simple\n> case for all the breakage, and the thing we are protecting with\n> authentication.\n>\n> > > > We, as a community, are clearly losing value by lack of this\n> > capability,\n> > > if by\n> > > > no other measure than simply the numerous users of the\n> commercial\n> > > > implementations feeling that they simply can’t use PG\n> without this\n> > > feature, for\n> > > > whatever their reasoning.\n> > >\n> > > That is true, but I go back to my concern over useful feature\n> vs.\n> > check\n> > > box.\n> > >\n> > > While it’s easy to label something as checkbox, I don’t feel we\n> have been\n> > fair\n> >\n> > No, actually, it isn't. I am not sure why you are saying that.\n> >\n> > I’m confused as to what is required to label a feature as a “checkbox”\n> feature\n> > then. What did you use to make that determination of this feature? I’m\n> happy to\n> > be wrong here.\n>\n> I don't see the point in me continuing to reply here. You just seem to\n> continue asking questions without actually thinking of what I am saying,\n> and hope I get tired or something.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Embrace your flaws. They make you human, rather than perfect,\n> which you will never be.\n>\n\n\n-- \nBest Wishes,\nChris Travers\n\nEfficito: Hosted Accounting and ERP. Robust and Flexible. No vendor\nlock-in.\nhttp://www.efficito.com/learn_more\n\nOn Tue, Mar 28, 2023 at 8:35 AM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Mar 28, 2023 at 02:03:50AM +0200, Stephen Frost wrote:\n> The remote storage is certainly an independent system. 
Multi-mount LUNs are\n> entirely possible in a SAN (and absolutely with NFS, or just the NFS server\n> itself is compromised..), so while the attacker may not have any access to the\n> database server itself, they may have access to these other systems, and that’s\n> not even considering in-transit attacks which are also absolutely possible,\n> especially with iSCSI or NFS. \n> \n> I don’t understand what is being claimed that the remote storage is “not an\n> independent system” based on my understanding of, eg, NFS. With NFS, a\n> directory on the NFS server is exported and the client mounts that directory as\n> NFS locally, all over a network which may or may not be secured against\n> manipulation. A user on the NFS server with root access is absolutely able to\n> access and modify files on the NFS server trivially, even if they have no\n> access to the PG server. Would you explain what you mean?\n\nThe point is that someone could change values in the storage, pg_xact,\nencryption settings, binaries, that would allow the attacker to learn\nthe encryption key. This is not possible for two secure endpoints and\nsomeone changing data in transit. Yeah, it took me a while to\nunderstand these boundaries too.\n\n> So the idea is that the backup user can be compromised without the data\n> being vulnerable --- makes sense, though that use-case seems narrow.\n> \n> That’s perhaps a fair consideration- but it’s clearly of enough value that many\n> of our users are asking for it and not using PG because we don’t have it today.\n> Ultimately though, this clearly makes it more than a “checkbox” feature. 
I hope\n> we are able to agree on that now.\n\nIt is more than a check box feature, yes, but I am guessing few people\nare wanting this for the actual features beyond check box.\n\n> Yes, there is value beyond the check-box, but in most cases those\n> values are limited considering the complexity of the features, and the\n> check-box is what most people are asking for, I think.\n> \n> For the users who ask on the lists for this feature, regularly, how many don’t\n> ask because they google or find prior responses on the list to the question of\n> if we have this capability? How do we know that their cases are “checkbox”? \n\nBecause I have rarely heard people articulate the value beyond check\nbox.I think there is value. I am going to try to articulate a case for this here.The first is that if people just want a \"checkbox\" then they can implement PostgreSQL in ways that have encryption at rest today. This includes using LUKS and the encryption options in pgbackrest. That's good enough for a checkbox. It isn't good enough for a real, secured instance however.There are a few problems with trying to do this for a secured instance. The first is that you have multiple links in the encryption chain, and the failure of any one of them will lead to cleartext exposure of data files. This is not a problem for those who just want to tick a checkbox. Also the fact that backups and main systems are separately encrypted there (if the backups are encrypted at all) means that people have to choose between complicating a restore process and simply ditching encryption on the backup, which makes the checkbox somewhat pointless.Where I have usually seen this come up is in the question of \"how do you prevent the problem of someone pulling storage devices from your servers and taking them away to compromise your data?\" Physical security comes into it but often times people want more than that as an answer. 
I saw questions like that from external auditors when I was at Adjust.If you want to actually address that problem, then the current tooling is quite cumbersome. Yes you can do it, but it is very hard to make sure it has been fully secured and also very hard to monitor. TDE would make the setup and verification of this much easier. And in particular it solves a number of other issues that I can see arising from LUKS and similar approaches since it doesn't rely on the kernel to be able to translate plain text to and from cypher text.I have actually worked with folks who have PII and need to protect it and who currently use LUKS and pg_backrest to do so. I would be extremely happy to see TDE replace those for their needs. I can imagine that those who hold high value data would use it as well instead of these other more error prone and less secure setups. \n\n> Consider that there are standards groups which explicitly consider these attack\n> vectors and consider them important enough to require mitigations to address\n> those vectors. Do the end users of PG understand the attack vectors or why they\n> matter? Perhaps not, but just because they can’t articulate the reasoning does\n> NOT mean that the attack vector doesn’t exist or that their environment is\n> somehow immune to it- indeed, as the standards bodies surely know, the opposite\n> is true- they’re almost certainly at risk of those attack vectors and therefore\n> the standards bodies are absolutely justified in requiring them to provide a\n> solution. Treating these users as unimportant because they don’t have the depth\n> of understanding that we do or that the standards body does is not helping\n> them- it’s actively driving them away from PG. 
\n\nWell, then who is going to explain them here, because I have not heard\nthem yet.\n\n> The RLS arguments were that queries could expose some of the underlying\n> data, but in summary, that was considered acceptable.\n> \n> This is an excellent point- and dovetails very nicely into my argument that\n> protecting primary data (what is provided by users and ends up in indexes and\n> heaps) is valuable even if we don’t (yet..) have protection for other parts of\n> the system. Reducing the size of the attack vector is absolutely useful,\n> especially when it’s such a large amount of the data in the system. Yes, we\n> should, and will, continue to improve- as we do with many features, but we\n> don’t need to wait for perfection to include this feature, just as with RLS and\n> numerous other features we have. \n\nThe issue is that you needed a certain type of user with a certain type\nof access to break RLS, while for this, writing to PGDATA is the simple\ncase for all the breakage, and the thing we are protecting with\nauthentication.\n\n> > > We, as a community, are clearly losing value by lack of this\n> capability,\n> > if by\n> > > no other measure than simply the numerous users of the commercial\n> > > implementations feeling that they simply can’t use PG without this\n> > feature, for\n> > > whatever their reasoning.\n> >\n> > That is true, but I go back to my concern over useful feature vs.\n> check\n> > box.\n> >\n> > While it’s easy to label something as checkbox, I don’t feel we have been\n> fair\n> \n> No, actually, it isn't. I am not sure why you are saying that.\n> \n> I’m confused as to what is required to label a feature as a “checkbox” feature\n> then. What did you use to make that determination of this feature? I’m happy to\n> be wrong here. \n\nI don't see the point in me continuing to reply here. 
You just seem to\ncontinue asking questions without actually thinking of what I am saying,\nand hope I get tired or something.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n-- Best Wishes,Chris TraversEfficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.http://www.efficito.com/learn_more",
"msg_date": "Tue, 28 Mar 2023 14:48:37 +0700",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "Greetings,\n\nI am including an updated version of this patch series; it has been rebased\nonto 6ec62b7799 and reworked somewhat.\n\nThe patches are as follows:\n\n0001 - doc updates\n0002 - Basic key management and cipher support\n0003 - Backend-related changes to support heap encryption\n0004 - modifications to bin tools and programs to manage key rotation and\nadd other knowledge\n0005 - Encrypted/authenticated WAL\n\nThese are very broad strokes at this point and should be split up a bit\nmore to make things more granular and easier to review, but I wanted to get\nthis update out.\n\nOf note, the encryption supported in this release as exposed to the\nheap-level is AES-XTS-128 and AES-XTS-256; there is built-in support for\nCTR and GCM, however based on other discussions related how to store the\nadditional authenticated data on the page, GCM has been removed from\nthe list of supported ciphers. This could certainly be enabled in the\nfuture, however the other pieces that this patchset provides would enable\nTDE without the additional block size/storage concerns.\n\nBest,\n\nDavid",
"msg_date": "Tue, 31 Oct 2023 16:23:17 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 04:23:17PM -0500, David Christensen wrote:\n> Greetings,\n> \n> I am including an updated version of this patch series; it has been rebased\n> onto 6ec62b7799 and reworked somewhat.\n> \n> The patches are as follows:\n> \n> 0001 - doc updates\n> 0002 - Basic key management and cipher support\n> 0003 - Backend-related changes to support heap encryption\n> 0004 - modifications to bin tools and programs to manage key rotation and add\n> other knowledge\n> 0005 - Encrypted/authenticated WAL\n> \n> These are very broad strokes at this point and should be split up a bit more to\n> make things more granular and easier to review, but I wanted to get this update\n> out.\n\nThis lacks temp table file encryption, right?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 17:30:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 4:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Oct 31, 2023 at 04:23:17PM -0500, David Christensen wrote:\n> > Greetings,\n> >\n> > I am including an updated version of this patch series; it has been\n> rebased\n> > onto 6ec62b7799 and reworked somewhat.\n> >\n> > The patches are as follows:\n> >\n> > 0001 - doc updates\n> > 0002 - Basic key management and cipher support\n> > 0003 - Backend-related changes to support heap encryption\n> > 0004 - modifications to bin tools and programs to manage key rotation\n> and add\n> > other knowledge\n> > 0005 - Encrypted/authenticated WAL\n> >\n> > These are very broad strokes at this point and should be split up a bit\n> more to\n> > make things more granular and easier to review, but I wanted to get this\n> update\n> > out.\n>\n> This lacks temp table file encryption, right?\n\n\nTemporary /files/ are handled in a different patch set and are not included\nhere (not sure of the status of integrating at this point). 
I believe that\nthis patch should handle temporary heap files just fine since they go\nthrough the Page API.",
"msg_date": "Tue, 31 Oct 2023 16:32:38 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 04:32:38PM -0500, David Christensen wrote:\n> On Tue, Oct 31, 2023 at 4:30 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Temporary /files/ are handled in a different patch set and are not included\n> here (not sure of the status of integrating at this point). I believe that\n> this patch should handle temporary heap files just fine since they go through\n> the Page API.\n\nYes, that's what I thought, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 31 Oct 2023 17:37:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Tue, 31 Oct 2023 at 22:23, David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Greetings,\n>\n> I am including an updated version of this patch series; it has been rebased onto 6ec62b7799 and reworked somewhat.\n>\n> The patches are as follows:\n>\n> 0001 - doc updates\n> 0002 - Basic key management and cipher support\n> 0003 - Backend-related changes to support heap encryption\n\nI'm quite surprised at the significant number of changes being made\noutside the core storage manager files. I thought that changing out\nmdsmgr with an encrypted smgr (that could wrap mdsmgr if so desired)\nwould be the most obvious change to implement cluster-wide encryption\nwith the least code touched, as relations don't need to know whether\nthe files they're writing are encrypted, right? Is there a reason to\nnot implement this at the smgr level that I overlooked in the\ndocumentation of these patches?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 2 Nov 2023 22:09:40 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-31 16:23:17 -0500, David Christensen wrote:\n> The patches are as follows:\n>\n> 0001 - doc updates\n> 0002 - Basic key management and cipher support\n> 0003 - Backend-related changes to support heap encryption\n> 0004 - modifications to bin tools and programs to manage key rotation and\n> add other knowledge\n> 0005 - Encrypted/authenticated WAL\n>\n> These are very broad strokes at this point and should be split up a bit\n> more to make things more granular and easier to review, but I wanted to get\n> this update out.\n\nYes, particularly 0003 really needs to be split - as is it's not easily\nreviewable.\n\n\n\n> From 327e86d52be1df8de9c3a324cb06b85ba5db9604 Mon Sep 17 00:00:00 2001\n> From: David Christensen <david@pgguru.net>\n> Date: Fri, 29 Sep 2023 15:16:00 -0400\n> Subject: [PATCH v3 5/5] Add encrypted/authenticated WAL\n>\n> When using an encrypted cluster, we need to ensure that the WAL is also\n> encrypted. While we could go with an page-based approach, we use instead a\n> per-record approach, using GCM for the encryption method and storing the AuthTag\n> in the xl_crc field.\n>\n> We change the xl_crc field to instead be a union struct, with a compile-time\n> adjustable size to allow us to customize the number of bytes allocated to the\n> GCM authtag. This allows us to easily adjust the size of bytes needed to\n> support our authentication. (Testing has included up to 12, but leaving at this\n> point to 4 due to keeping the size of the WAL records the same.)\n\nUgh, that'll be quite a bit of overhead in some workloads... You can't really\nuse such a build for non-encrypted workloads, making this a not very\ndeployable path...\n\n\n> @@ -905,20 +905,28 @@ XLogInsertRecord(XLogRecData *rdata,\n> \t{\n> \t\t/*\n> \t\t * Now that xl_prev has been filled in, calculate CRC of the record\n> -\t\t * header.\n> +\t\t * header. 
If we are using encrypted WAL, this CRC is overwritten by\n> +\t\t * the authentication tag, so just zero\n> \t\t */\n> -\t\trdata_crc = rechdr->xl_crc;\n> -\t\tCOMP_CRC32C(rdata_crc, rechdr, offsetof(XLogRecord, xl_crc));\n> -\t\tFIN_CRC32C(rdata_crc);\n> -\t\trechdr->xl_crc = rdata_crc;\n> +\t\tif (!encrypt_wal)\n> +\t\t{\n> +\t\t\trdata_crc = rechdr->xl_integrity.crc;\n> +\t\t\tCOMP_CRC32C(rdata_crc, rechdr, offsetof(XLogRecord, xl_integrity.crc));\n> +\t\t\tFIN_CRC32C(rdata_crc);\n> +\t\t\trechdr->xl_integrity.crc = rdata_crc;\n> +\t\t}\n> +\t\telse\n> +\t\t\tmemset(&rechdr->xl_integrity, 0, sizeof(rechdr->xl_integrity));\n\nWhy aren't you encrypting most of the data here? Just as for CRC computation,\nencrypting a large record in XLOG_BLCKSZ\n\n\n> * XLogRecordDataHeaderLong structs all begin with a single 'id' byte. It's\n> * used to distinguish between block references, and the main data structs.\n> */\n> +\n> +#define XL_AUTHTAG_SIZE 4\n> +#define XL_HEADER_PAD 2\n> +\n> typedef struct XLogRecord\n> {\n> \tuint32\t\txl_tot_len;\t\t/* total len of entire record */\n> @@ -45,14 +49,16 @@ typedef struct XLogRecord\n> \tXLogRecPtr\txl_prev;\t\t/* ptr to previous record in log */\n> \tuint8\t\txl_info;\t\t/* flag bits, see below */\n> \tRmgrId\t\txl_rmid;\t\t/* resource manager for this record */\n> -\t/* 2 bytes of padding here, initialize to zero */\n> -\tpg_crc32c\txl_crc;\t\t\t/* CRC for this record */\n> -\n> +\tuint8 xl_pad[XL_HEADER_PAD];\t\t/* required alignment padding */\n\nWhat does \"required\" mean here? And why is this defined in a separate define?\nAnd why do we need the explicit field here at all? 
The union will still\nprovide sufficient alignment for a uint32.\n\nIt also doesn't seem right to remove the comment about needing to zero out the\nspace.\n\n\n> From 7557acf60f52da4a86fd9f902bab4804c028dd4b Mon Sep 17 00:00:00 2001\n> From: David Christensen <david.christensen@crunchydata.com>\n> Date: Tue, 31 Oct 2023 15:24:02 -0400\n> Subject: [PATCH v3 3/5] Backend-related changes\n\n\n\n\n> --- a/src/backend/access/heap/rewriteheap.c\n> +++ b/src/backend/access/heap/rewriteheap.c\n> @@ -323,6 +323,11 @@ end_heap_rewrite(RewriteState state)\n> \t\t\t\t\t\tstate->rs_buffer,\n> \t\t\t\t\t\ttrue);\n>\n> +\t\tPageEncryptInplace(state->rs_buffer, MAIN_FORKNUM,\n> +\t\t\t\t\t\t RelationIsPermanent(state->rs_new_rel),\n> +\t\t\t\t\t\t state->rs_blockno,\n> +\t\t\t\t\t\t RelationGetSmgr(state->rs_new_rel)->smgr_rlocator.locator.relNumber\n> +\t\t\t);\n> \t\tPageSetChecksumInplace(state->rs_buffer, state->rs_blockno);\n>\n> \t\tsmgrextend(RelationGetSmgr(state->rs_new_rel), MAIN_FORKNUM,\n\nI don't think it's ok to have to make such changes in a bunch of places. I\nthink we need to add an abstraction layer between smgr and code like this\nfirst. Luckily I think Heikki was working abstracting away some of these\ndirect smgr* uses...\n\n\n\n> +Architecture\n> +------------\n> +\n> +Fundamentally, cluster file encryption must store data in a file system\n> +in such a way that the keys required to decrypt the file system data can\n> +only be accessed using somewhere outside of the file system itself. The\n> +external requirement can be someone typing in a passphrase, getting a\n> +key from a key management server (KMS), or decrypting a key stored in\n> +the file system using a hardware security module (HSM). 
The current\n> +architecture supports all of these methods, and includes sample scripts\n> +for them.\n> +\n> +The simplest method for accessing data keys using some external\n> +requirement would be to retrieve all data encryption keys from a KMS.\n> +However, retrieved keys would still need to be verified as valid. This\n> +method also introduces unacceptable complexity for simpler use-cases,\n> +like user-supplied passphrases or HSM usage. External key rotation\n> +would also be very hard since it would require re-encrypting all the\n> +file system data with the new externally-stored keys.\n> +\n> +For these reason, a two-tiered architecture is used, which uses two\n> +types of encryption keys: a key encryption key (KEK) and data encryption\n> +keys (DEK). The KEK should not be present unencrypted in the file system\n> +--- it should be supplied the user, stored externally (e.g., in a KMS)\n\n*by* the user?\n\n> +or stored in the file system encrypted with a HSM (e.g., PIV device).\n> +The DEK is used to encrypt database files and is stored in the same file\n> +system as the database but is encrypted using the KEK. Because the DEK\n> +is encrypted, its storage in the file system is no more of a security\n> +weakness and the storage of the encrypted database files in the same\n> +file system.\n\nAs is this paragraph doesn't really follow from the prior paragraph for\nme. That a KMS would be hard to use isn't obviously related to splitting the\nKEK and the DEK.\n\n\n\n> +Implementation\n> +--------------\n> +\n> +To enable cluster file encryption, the initdb option\n> +--cluster-key-command must be used, which specifies a command to\n> +retrieve the KEK.\n\nFWIW, I think \"cluster file encryption\" is somewhat ambiguous. That could also\nmean encrypting on the file system level or such.\n\n\n> initdb records the cluster_key_command in\n> +postgresql.conf. 
Every time the KEK is needed, the command is run and\n> +must return 64 hex characters which are decoded into the KEK. The\n> +command is called twice during initdb, and every time the server starts.\n> +initdb also sets the encryption method in controldata during server\n> +bootstrap.\n> +\n> +initdb runs \"postgres --boot\", which calls function\n> +kmgr.c::BootStrapKmgr(), which calls the cluster key command. The\n> +cluster key command returns a KEK which is used to encrypt random bytes\n> +for each DEK and writes them to the file system by\n> +kmgr.c::KmgrWriteCryptoKeys() (unless --copy-encryption-keys is used).\n> +Currently the DEK files are 0 and 1 and are stored in\n> +$PGDATA/pg_cryptokeys/live. The wrapped DEK files use Key Wrapping with\n> +Padding which verifies the validity of the KEK.\n> +\n> +initdb also does a non-boot backend start which calls\n> +kmgr.c::InitializeKmgr(), which calls the cluster key command a second\n> +time. This decrypts/unwraps the DEK keys and stores them in the shared\n> +memory structure KmgrShmem. This step also happens every time the server\n> +starts. Later patches will use the keys stored in KmgrShmem to\n> +encrypt/decrypt database files. KmgrShmem is erased via\n> +explicit_bzero() on server shutdown.\n\nI think this encodes too many details of how initdb works today. It seems\nlikely that nobody adding or removing a restart will think of updating this\nfile - nor should they have to. I'd just say that initdb starts/stops\npostgres multiple times.\n\n\n\n> +Initialization Vector\n> +---------------------\n> +\n> +Nonce means \"number used once\". An Initialization Vector (IV) is a\n> +specific type of nonce. 
That is, unique but not necessarily random or\n> +secret, as specified by the NIST\n> +(https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38a.pdf).\n> +To generate unique IVs, the NIST recommends two methods:\n> +\n> +\tThe first method is to apply the forward cipher function, under\n> +\tthe same key that is used for the encryption of the plaintext,\n> +\tto a nonce. The nonce must be a data block that is unique to\n> +\teach execution of the encryption operation. For example, the\n> +\tnonce may be a counter, as described in Appendix B, or a message\n> +\tnumber. The second method is to generate a random data block\n> +\tusing a FIPS-approved random number generator.\n> +\n> +We will use the first method to generate IVs. That is, select nonce\n> +carefully and use a cipher with the key to make it unique enough to use\n> +as an IV. The nonce selection for buffer encryption and WAL encryption\n> +are described below.\n> +\n> +If the IV was used more than once with the same key (and we only use one\n> +data encryption key), changes in the unencrypted data would be visible\n> +in the encrypted data.\n> +\n> +IV for Heap/Index Encryption\n> +- - - - - - - - - - - - - -\n> +\n> +To create the 16-byte IV needed by AES for each page version, we will\n> +use the page LSN (8 bytes) and page number (4 bytes).\n\nI still am quite quite unconvinced that using the LSN as a nonce is a good\ndesign decision.\n\n- LSNs can end up being reused after crash restarts\n- the LSN does not contain the timeline ID, if a standby is promoted, two\n systems can be using the same LSNs\n- The LSN does *NOT* actually change every time a page is modified. Even with\n wal_log_hint_bits, only the first hint bit modification to a page in a\n checkpoint cycles will cause WAL writes - and changing that would have\n a quite substantial overhead.\n\n\n> The LSN is ideal for use in the IV because it is +always increasing, and is\n> changed every time a page is updated. 
The +same LSN is never used for two\n> relations with different page contents.\n\nAs mentioned above, this is not true - the LSN does *NOT* change every time\nthe page is updated.\n\n\n> +However, the same LSN can be used in multiple pages in the same relation\n> +--- this can happen when a heap update expires an old tuple and adds a\n> +new tuple to another page. By adding the page number to the IV, we keep\n> +the IV unique.\n\nThere's many other ways that can happen.\n\n\n> +CREATE DATABASE can be run with two different strategies: FILE_COPY or\n> +WAL_LOG. If using WAL_LOG, the heap/index files are automatically\n> +rewritten with new LSNs as part of the copy operation and will get new\n> +IVs automatically.\n> +\n> +This approach still works with the older FILE_COPY stragegy; by not\n> +using the database id in the IV, CREATE DATABASE can copy the heap/index\n> +files from the old database to a new one without decryption/encryption.\n> +Both page copies are valid. Once a database changes its pages, it gets\n> +new LSNs, and hence new IV.\n> +\n> +As part of WAL logging, every change of a WAL-logged page gets a new\n> +LSN, and therefore a new IV automatically.\n> +\n> +However, the LSN must then be visible on encrypted pages, so we will not\n> +encrypt the LSN on the page. We will also not encrypt the CRC so\n> +pg_checksums can still check pages offline without access to the keys.\n\ns/crc/checksum/? The data-page-level checksum isn't a CRC.\n\n\n> +Non-Permanent Relations\n> +- - - - - - - - - - - -\n> +\n> +To avoid the overhead of generating WAL for non-permanent (unlogged and\n> +temporary) relations, we assign fake LSNs that are derived from a\n> +counter via xlog.c::GetFakeLSNForUnloggedRel(). (GiST also uses this\n> +counter for LSNs.) We also set a bit in the IV so the use of the same\n> +value for WAL (real) and fake LSNs will still generate unique IVs. 
Only\n> +main forks are encrypted, not init, vm, or fsm files.\n\nWhy aren't other forks encrypted? This seems like a very odd design to me.\n\n\n> +Hint Bits\n> +- - - - -\n> +\n> +For hint bit changes, the LSN normally doesn't change, which is a\n> +problem. By enabling wal_log_hints, you get full page writes to the WAL\n> +after the first hint bit change of the checkpoint. This is useful for\n> +two reasons. First, it generates a new LSN, which is needed for the IV\n> +to be secure. Second, full page images protect against torn pages,\n> +which is an even bigger requirement for encryption because the new LSN\n> +is re-encrypting the entire page, not just the hint bit changes. You\n> +can safely lose the hint bit changes, but you need to use the same LSN\n> +to decrypt the entire page, so a torn page with an LSN change cannot be\n> +decrypted. To prevent this, wal_log_hints guarantees that the\n> +pre-hint-bit version (and previous LSN version) of the page is restored.\n> +\n> +However, if a hint-bit-modified page is written to the file system\n> +during a checkpoint, and there is a later hint bit change switching the\n> +same page from clean to dirty during the same checkpoint, we need a new\n> +LSN, and wal_log_hints doesn't give us a new LSN here. The fix for this\n> +is to update the page LSN by writing a dummy WAL record via\n> +xloginsert.c::LSNForEncryption() in such cases.\n\nUgh, so that's really the plan. That's a substantial overhead in some common\nscenarios.\n\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Nov 2023 19:32:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-02 22:09:40 +0100, Matthias van de Meent wrote:\n> I'm quite surprised at the significant number of changes being made\n> outside the core storage manager files. I thought that changing out\n> mdsmgr with an encrypted smgr (that could wrap mdsmgr if so desired)\n> would be the most obvious change to implement cluster-wide encryption\n> with the least code touched, as relations don't need to know whether\n> the files they're writing are encrypted, right? Is there a reason to\n> not implement this at the smgr level that I overlooked in the\n> documentation of these patches?\n\nYou can't really implement encryption transparently inside an smgr without\nsignificant downsides. You need a way to store an initialization vector\nassociated with the page (or you can store that elsewhere, but then you've\ndoubled the worst cse amount of random reads/writes). The patch uses the LSN\nas the IV (which I doubt is a good idea). For authenticated encryption further\nadditional storage space is required.\n\nTo be able to to use the LSN as the IV, the patch needs to ensure that the LSN\nincreases in additional situations (a more aggressive version of\nwal_log_hint_bits) - which can't be done below smgr, where we don't know about\nwhat WAL logging was done. Nor can you easily just add space on the page below\nmd.c, for the purpose of storing an LSN independent IV and the authentication\ndata.\n\nYou could decide that the security that XTS provides is sufficient (XTS could\nbe used without a real IV, with some downsides), but we'd pretty much prevent\nourselves from ever implementing authenticated encryption.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 19:38:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On 2023-11-02 19:32:28 -0700, Andres Freund wrote:\n> > From 327e86d52be1df8de9c3a324cb06b85ba5db9604 Mon Sep 17 00:00:00 2001\n> > From: David Christensen <david@pgguru.net>\n> > Date: Fri, 29 Sep 2023 15:16:00 -0400\n> > Subject: [PATCH v3 5/5] Add encrypted/authenticated WAL\n> >\n> > When using an encrypted cluster, we need to ensure that the WAL is also\n> > encrypted. While we could go with an page-based approach, we use instead a\n> > per-record approach, using GCM for the encryption method and storing the AuthTag\n> > in the xl_crc field.\n\nWhat was the reason for this decision?\n\n?\n\n\n",
"msg_date": "Fri, 3 Nov 2023 19:53:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Sat, 4 Nov 2023 at 03:38, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-11-02 22:09:40 +0100, Matthias van de Meent wrote:\n> > I'm quite surprised at the significant number of changes being made\n> > outside the core storage manager files. I thought that changing out\n> > mdsmgr with an encrypted smgr (that could wrap mdsmgr if so desired)\n> > would be the most obvious change to implement cluster-wide encryption\n> > with the least code touched, as relations don't need to know whether\n> > the files they're writing are encrypted, right? Is there a reason to\n> > not implement this at the smgr level that I overlooked in the\n> > documentation of these patches?\n>\n> You can't really implement encryption transparently inside an smgr without\n> significant downsides. You need a way to store an initialization vector\n> associated with the page (or you can store that elsewhere, but then you've\n> doubled the worst cse amount of random reads/writes). The patch uses the LSN\n> as the IV (which I doubt is a good idea). For authenticated encryption further\n> additional storage space is required.\n\nI am unaware of any user of the smgr API that doesn't also use the\nbuffer cache, and thus implicitly the Page layout with PageHeader\n[^1]. The API of smgr is also tailored to page-sized quanta of data\nwith mostly relation-level information. I don't see why there would be\na veil covering the layout of Page for smgr when all other information\nalready points to the use of PageHeader and Page layouts. In my view,\nit would even make sense to allow the smgr to get exclusive access to\nsome part of the page in the current Page layout.\n\nYes, I agree that there will be an impact on usable page size if you\nwant authenticated encryption, and that AMs will indeed need to\naccount for storage space now being used by the smgr - inconvenient,\nbut it serves a purpose. 
That would happen regardless of whether smgr\nor some higher system decides where to store the data for encryption -\nas long as it is on the page, the AM effectively can't use those\nbytes.\nBut I'd say that's best solved by making the Page documentation and\nPageInit API explicit about the potential use of that space by the\nchosen storage method (encrypted, plain, ...) instead of requiring the\nvarious AMs to manually consider encryption when using Postgres' APIs\nfor writing data to disk without hitting shared buffers; page space\nmanagement is already a task of AMs, but handling the actual\nencryption is not.\n\nShould the AM really care whether the data on disk is encrypted or\nnot? I don't think so. When the disk contains encrypted bytes, but\nsmgrread() and smgrwrite() both produce and accept plaintext data,\nwho's going to complain? Requiring AMs to be mindful about encryption\non all common paths only adds pitfalls where encryption would be\nforgotten by the developer of AMs in one path or another.\n\n> To be able to to use the LSN as the IV, the patch needs to ensure that the LSN\n> increases in additional situations (a more aggressive version of\n> wal_log_hint_bits) - which can't be done below smgr, where we don't know about\n> what WAL logging was done. 
Nor can you easily just add space on the page below\n> md.c, for the purpose of storing an LSN independent IV and the authentication\n> data.\n\nI think that getting PageInit to allocate the smgr-specific area would\ntake some effort, too (which would potentially require adding some\nrelational context to PageInit, so that it knows which page of which\nrelation it is going to initialize), but IMHO that would be more\nnatural than requiring all index and table AMs to be aware the actual\nencryption of its pages and require manual handling of that encryption\nwhen the page needs to be written to disk, when it otherwise already\nconforms to the various buffer management and file extension APIs\ncurrently in use in PostgreSQL. I would expect \"transparent\" data\nencryption to be handled at the file write layer (i.e. smgr), not\ninside the AMs.\n\nKind regards,\n\nMatthias van de Meent\n\n[^1] ReadBuffer_common uses PageIsVerifiedExtended which verifies that\na page conforms with Postgres' Page layout if checksums are enabled.\nFurthermore, all builtin index AMs utilize pd_special, further\nimplying the use of a PageInit/PageHeader-based page layout.\nAdditionally, the heap tableAM also complies, and both FSM and VM also\nuse postgres' Page layout.\nAs for other AMs that I could check: bloom, rum, and pgvector's\nivfflat and hnsw all use page layouts.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 11:26:44 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "Greetings,\n\nThanks for your feedback on this.\n\n* Andres Freund (andres@anarazel.de) wrote:\n> I still am quite quite unconvinced that using the LSN as a nonce is a good\n> design decision.\n\nThis is a really important part of the overall path to moving this\nforward, so I wanted to jump to it and have a specific discussion\naround this. I agree that there are downsides to using the LSN, some of\nwhich we could possibly address (eg: include the timeline ID in the IV),\nbut others that would be harder to deal with.\n\nThe question then is- what's the alternative?\n\nOne approach would be to change the page format to include space for an\nexplicit nonce. I don't see the community accepting such a new page\nformat as the only format we support though as that would mean no\npg_upgrade support along with wasted space if TDE isn't being used.\nIdeally, we'd instead be able to support multiple page formats where\nusers could decide when they create their cluster what features they\nwant- and luckily we even have such an effort underway with patches\nposted for review [1]. 
Certainly, with the base page-special-feature\npatch, we could have an option for users to choose that they want a\nbetter nonce than the LSN, or we could bundle that assumption in with,\nsay, the authenticated-encryption feature (if you want authenticated\nencryption, then you want more from the encryption system than the\nbasics, and therefore we presume you also want a better nonce than the\nLSN).\n\nAnother approach would be a separate fork, but that then has a number of\ndownsides too- every write has to touch that too, and a single page of\nnonces would cover a pretty large number of pages also.\n\nUltimately, I agree with you that the LSN isn't perfect and we really\nshouldn't be calling it 'ideal' as it isn't, and we can work to fix that\nlanguage in the patch, but the lack of any alternative being proposed\nthat might be acceptable makes this feel a bit like rock management [2].\n\nMy apologies if there's something good that's already been specifically\npushed and I just missed it; if so, a link to that suggestion and\ndiscussion would be greatly appreciated.\n\nThanks again!\n\nStephen\n\n[1]: https://commitfest.postgresql.org/45/3986/\n[2]: https://en.wikipedia.org/wiki/Wikipedia:Bring_me_a_rock ; though\nthat isn't great for a quick summary (which I tried to find on an\nisolated page somewhere and didn't).\n\nThe gist is, without a suggestion of things to try, we're left\nto our own devices to try and figure out things which might be\nsuccessful, only to have those turned down too when we come back with\nthem, see [1] for what feels like an example of this. 
Given your\nfeedback overall, which I'm very thankful for, I'm hopeful that you see\nthat this is, indeed, a useful feature that people are asking for and\ntherefore are willing to spend some time on it, but if the feedback is\nthat nothing on the page is acceptable to use for the nonce, we can't\nput the nonce somewhere else, and we can't change the page format, then\neverything else is just moving deck chairs around on the titanic that\nhas been this effort.",
"msg_date": "Mon, 6 Nov 2023 09:56:37 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 07:32:28PM -0700, Andres Freund wrote:\n> On 2023-10-31 16:23:17 -0500, David Christensen wrote:\n> > +Implementation\n> > +--------------\n> > +\n> > +To enable cluster file encryption, the initdb option\n> > +--cluster-key-command must be used, which specifies a command to\n> > +retrieve the KEK.\n> \n> FWIW, I think \"cluster file encryption\" is somewhat ambiguous. That could also\n> mean encrypting on the file system level or such.\n\nWe could call it:\n\n* cluster data file encryption\n* cluster data encryption\n* database cluster encryption\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 11:04:04 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 09:56:37AM -0500, Stephen Frost wrote:\n> The gist is, without a suggestion of things to try, we're left\n> to our own devices to try and figure out things which might be\n> successful, only to have those turned down too when we come back with\n> them, see [1] for what feels like an example of this. Given your\n> feedback overall, which I'm very thankful for, I'm hopeful that you see\n> that this is, indeed, a useful feature that people are asking for and\n> therefore are willing to spend some time on it, but if the feedback is\n> that nothing on the page is acceptable to use for the nonce, we can't\n> put the nonce somewhere else, and we can't change the page format, then\n> everything else is just moving deck chairs around on the titanic that\n> has been this effort.\n\nYeah, I know the feeling, though I thought XTS was immune enough to\nnonce/LSN reuse that it was acceptable.\n\nWhat got me sunk on the feature was the complexity of adding temporary\nfile encryption support and that tipped the scales in the negative for\nme in community value of the feature vs. added complexity. (Yeah, I used\na Titanic reference in the last sentence. ;-) ) However, I am open to\nthe community value and complexity values changing over time. My blog\npost on the topic:\n\n\thttps://momjian.us/main/blogs/pgblog/2023.html#October_19_2023\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 11:18:55 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "Hi, thanks for the detailed feedback here.\n\nI do think it's worth addressing the question Stephen raised as far as what\nwe use for the IV[1]; whether LSN or something else entirely, and if so\nwhat. The choice of LSN here is fairly fundamental to the existing\nimplementation, so if we decide to do something different some of this\nmight be moot.\n\nBest,\n\nDavid\n\n[1]\nhttps://www.mail-archive.com/pgsql-hackers@lists.postgresql.org/msg154613.html",
"msg_date": "Mon, 6 Nov 2023 10:32:30 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
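As an aside on the IV question raised above: if the IV were derived from more than the LSN, one hypothetical packing — timeline ID plus LSN plus block number into the 16-byte AES block size; this is illustrative only and not what any posted patch does — could look like:

```python
import struct

def page_iv(timeline_id: int, lsn: int, block_number: int) -> bytes:
    """Pack a 16-byte IV: 4-byte timeline ID, 8-byte LSN, 4-byte block number.

    Hypothetical layout for illustration. Including the timeline ID avoids
    IV reuse when the same LSN recurs on a diverged timeline, and the block
    number distinguishes pages stamped with the same LSN by one WAL record.
    """
    return struct.pack(">IQI", timeline_id, lsn, block_number)
```

Any scheme like this still inherits the LSN's weaknesses (e.g. hint-bit writes that rewrite a page without advancing the LSN), which is the crux of the thread.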
{
"msg_contents": "On Fri, Nov 3, 2023 at 9:53 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2023-11-02 19:32:28 -0700, Andres Freund wrote:\n> > > From 327e86d52be1df8de9c3a324cb06b85ba5db9604 Mon Sep 17 00:00:00 2001\n> > > From: David Christensen <david@pgguru.net>\n> > > Date: Fri, 29 Sep 2023 15:16:00 -0400\n> > > Subject: [PATCH v3 5/5] Add encrypted/authenticated WAL\n> > >\n> > > When using an encrypted cluster, we need to ensure that the WAL is also\n> > > encrypted. While we could go with an page-based approach, we use\ninstead a\n> > > per-record approach, using GCM for the encryption method and storing\nthe AuthTag\n> > > in the xl_crc field.\n>\n> What was the reason for this decision?\n>\n\nThis was mainly to prevent IV reuse by using a per-record encryption rather\nthan per-page, since partial writes out on the WAL buffer would result in\nreuse there. This was somewhat of an experiment since authenticated data\nper record was basically equivalent in function to the CRC.\n\nThere was a switch here so normal clusters use the crc field with the\nexisting CRC implementation, only encrypted clusters use this alternate\napproach.",
"msg_date": "Mon, 6 Nov 2023 10:37:39 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
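The idea described above — storing a per-record authentication tag in the 32-bit slot that xl_crc normally occupies — can be sketched without a real AES-GCM implementation. Below, a keyed BLAKE2 MAC truncated to 4 bytes stands in for the GCM tag; this is purely illustrative (the patch itself uses GCM via OpenSSL, and unencrypted clusters keep the CRC-32C):

```python
import hashlib
import struct

XLOG_KEY = b"\x00" * 32  # placeholder WAL key, for illustration only

def record_auth_tag(record_body: bytes, key: bytes = XLOG_KEY) -> int:
    # 4-byte keyed MAC standing in for a truncated GCM auth tag; it drops
    # into the same 32-bit header slot that xl_crc occupies, so the record
    # header layout does not change size.
    digest = hashlib.blake2b(record_body, key=key, digest_size=4).digest()
    return struct.unpack("<I", digest)[0]

def verify_record(record_body: bytes, stored_tag: int) -> bool:
    # Replay/recovery recomputes the tag and compares, exactly where the
    # CRC check would otherwise happen.
    return record_auth_tag(record_body) == stored_tag
```

The trade-off, as discussed downthread, is that a tag truncated to 32 bits gives much weaker authentication than GCM's full 128-bit tag.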
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Mon, Nov 6, 2023 at 09:56:37AM -0500, Stephen Frost wrote:\n> > The gist is, without a suggestion of things to try, we're left\n> > to our own devices to try and figure out things which might be\n> > successful, only to have those turned down too when we come back with\n> > them, see [1] for what feels like an example of this. Given your\n> > feedback overall, which I'm very thankful for, I'm hopeful that you see\n> > that this is, indeed, a useful feature that people are asking for and\n> > therefore are willing to spend some time on it, but if the feedback is\n> > that nothing on the page is acceptable to use for the nonce, we can't\n> > put the nonce somewhere else, and we can't change the page format, then\n> > everything else is just moving deck chairs around on the titanic that\n> > has been this effort.\n> \n> Yeah, I know the feeling, though I thought XTS was immune enough to\n> nonce/LSN reuse that it was acceptable.\n\nUltimately it depends on the attack vector you're trying to address, but\ngenerally, I think you're right about the XTS tweak reuse not being that\nbig of a deal. XTS isn't CTR or GCM.\n\nWith FDE (full disk encryption) you're expecting the attacker to steal\nthe physical laptop, hard drive, etc, generally, and so the downside of\nusing the same tweak with XTS over and over again isn't that bad (and is\nwhat most FDE solutions do, aiui, by simply using the sector number; we\ncould do something similar to that by using the relfilenode + block\nnumber) because that re-use is a problem if the attacker is able to see\nmultiple copies of the same block over time where the block has been\nencrypted with different data but the same key and tweak.\n\nUsing the LSN actually is better than what the FDE solutions do because\nthe LSN varies much more often than the sector number. 
Sure, it doesn't\nchange with every write and maybe an attacker could glean something from\nthat, but that's quite narrow. The downside from the LSN based approach\nwith XTS is probably more that using the LSN means that we can't encrypt\nthe LSN itself and that is a leak too- but then again, we leak that\nthrough the simple WAL filenames too, to some extent, so it doesn't\nstrike me as a huge issue.\n\nXTS as a block cipher doesn't suffer from the IV-reuse issue that you\nhave with streaming ciphers where the same key+IV and different data\nleads to being able to trivially retrieve the plaintext though and I worry\nthat maybe that's what people were thinking.\n\nThe README and comments I don't think were terribly clear on this and I\nthink may have even been from back when CTR was being considered, where\nIV reuse would have resulted in plaintext being trivially available.\n\n> What got me sunk on the feature was the complexity of adding temporary\n> file encryption support and that tipped the scales in the negative for\n> me in community value of the feature vs. added complexity. (Yeah, I used\n> a Titanic reference in the last sentence. ;-) ) However, I am open to\n> the community value and complexity values changing over time. My blog\n> post on the topic:\n\nWe do need to address the temporary file situation too and we do have a\nbit of an issue that how we deal with temporary files today in PG isn't\nvery consistent and there's too many ways to do that. There's a patch\nthat works on that, though it has some bitrot that we're working on\naddressing currently.\n\nThere is value in simply fixing that situation wrt temporary file\nmanagement independent of encryption, though of course then encryption\nof those temporary files becomes much simpler.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 7 Nov 2023 17:40:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
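Stephen's point above — that FDE solutions derive the XTS tweak from the sector number, and that relfilenode plus block number would be the Postgres analogue — can be sketched as a tweak constructor. The 16-byte size matches the XTS tweak length; the exact field layout is an assumption for illustration:

```python
import struct

def xts_tweak(relfilenumber: int, block_number: int) -> bytes:
    """16-byte XTS tweak from relfilenumber + block number.

    Analogous to FDE's use of the sector number: an XTS tweak need not be
    secret or random, only distinct per encrypted unit under a given key,
    and unlike the LSN it never has to be stored on (or leaked by) the page.
    """
    return struct.pack("<QQ", relfilenumber, block_number)
```

The repeated-tweak caveat from the message still applies: re-encrypting the same block with different contents under the same key and tweak lets an attacker who sees both ciphertexts learn which 16-byte cipher blocks changed, which is the accepted FDE threat model.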
{
"msg_contents": "Hi,\n\nOn 2023-11-06 09:56:37 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > I still am quite quite unconvinced that using the LSN as a nonce is a good\n> > design decision.\n> \n> This is a really important part of the overall path to moving this\n> forward, so I wanted to jump to it and have a specific discussion\n> around this. I agree that there are downsides to using the LSN, some of\n> which we could possibly address (eg: include the timeline ID in the IV),\n> but others that would be harder to deal with.\n\n> The question then is- what's the alternative?\n> \n> One approach would be to change the page format to include space for an\n> explicit nonce. I don't see the community accepting such a new page\n> format as the only format we support though as that would mean no\n> pg_upgrade support along with wasted space if TDE isn't being used.\n\nRight.\n\n\n> Ideally, we'd instead be able to support multiple page formats where\n> users could decide when they create their cluster what features they\n> want- and luckily we even have such an effort underway with patches\n> posted for review [1].\n\nI think there are some details wrong with that patch - IMO the existing macros\nshould just continue to work as-is and instead the places that want the more\nnarrow definition should be moved to the new macros and it changes places that\nshould continue to use compile time constants - but it doesn't seem like a\nfundamentally bad idea to me. 
I certainly like it much better than making the\npage size runtime configurable.\n\n(I'll try to reply with the above points to [1])\n\n\n> Certainly, with the base page-special-feature patch, we could have an option\n> for users to choose that they want a better nonce than the LSN, or we could\n> bundle that assumption in with, say, the authenticated-encryption feature\n> (if you want authenticated encryption, then you want more from the\n> encryption system than the basics, and therefore we presume you also want a\n> better nonce than the LSN).\n\nI don't think we should support using the LSN as a nonce if we have an\nalternative. The cost and complexity overhead is just not worth it. Yes,\nit'll be harder for users to migrate to encryption, but adding complexity\nelsewhere in the system to get an inferior result isn't worth it.\n\n\n> Another approach would be a separate fork, but that then has a number of\n> downsides too- every write has to touch that too, and a single page of\n> nonces would cover a pretty large number of pages also.\n\nYea, the costs of doing so is nontrivial. If you were trying to implement\nencryption on the smgr level - which I doubt you should but am not certain\nabout - my suggestion would be to interleave pages with metadata like nonces\nand AEAD with the data pages. I.e. one metadata page would be followed by\n (BLCKSZ - SizeOfPageHeaderData) / (sizeof(nonce) + sizeof(AEAD))\npages containing actual relation data. That way you still get decent locality\nduring scans and writes.\n\nRelation forks were a mistake, we shouldn't use them in more places.\n\n\nI think it'd be much better if we also encrypted forks, rather than just the\nmain fork...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Nov 2023 15:49:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
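Andres's interleaving suggestion implies a simple address translation between logical relation blocks and physical file blocks. A sketch of the arithmetic, using assumed sizes (8 kB pages, 24-byte page header, 16-byte nonce, 16-byte AEAD tag):

```python
BLCKSZ = 8192
PAGE_HEADER = 24           # MAXALIGN(SizeOfPageHeaderData), assumed
NONCE_SZ, TAG_SZ = 16, 16  # per-page nonce + AEAD tag, assumed sizes

# Data pages whose metadata fits on one interleaved metadata page:
DATA_PER_GROUP = (BLCKSZ - PAGE_HEADER) // (NONCE_SZ + TAG_SZ)  # 255 here

def physical_block(logical: int) -> int:
    """File block holding logical data block `logical`; each group of
    DATA_PER_GROUP data pages is preceded by one metadata page."""
    group, offset = divmod(logical, DATA_PER_GROUP)
    return group * (DATA_PER_GROUP + 1) + 1 + offset

def metadata_block(logical: int) -> int:
    """File block holding the nonce/tag entry for `logical`."""
    return (logical // DATA_PER_GROUP) * (DATA_PER_GROUP + 1)
```

The metadata page for any block is at most DATA_PER_GROUP blocks away in the same file, which is the locality argument for interleaving over a separate fork.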
{
"msg_contents": "Hi,\n\nOn 2023-11-06 11:26:44 +0100, Matthias van de Meent wrote:\n> On Sat, 4 Nov 2023 at 03:38, Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-11-02 22:09:40 +0100, Matthias van de Meent wrote:\n> > > I'm quite surprised at the significant number of changes being made\n> > > outside the core storage manager files. I thought that changing out\n> > > mdsmgr with an encrypted smgr (that could wrap mdsmgr if so desired)\n> > > would be the most obvious change to implement cluster-wide encryption\n> > > with the least code touched, as relations don't need to know whether\n> > > the files they're writing are encrypted, right? Is there a reason to\n> > > not implement this at the smgr level that I overlooked in the\n> > > documentation of these patches?\n> >\n> > You can't really implement encryption transparently inside an smgr without\n> > significant downsides. You need a way to store an initialization vector\n> > associated with the page (or you can store that elsewhere, but then you've\n> > doubled the worst cse amount of random reads/writes). The patch uses the LSN\n> > as the IV (which I doubt is a good idea). For authenticated encryption further\n> > additional storage space is required.\n> \n> I am unaware of any user of the smgr API that doesn't also use the\n> buffer cache, and thus implicitly the Page layout with PageHeader\n> [^1]\n\nEverything indeed uses a PageHeader - but there are a number of places that do\n*not* utilize pd_lower/upper/special. E.g. visibilitymap.c just assumes that\nthose fields are zero - and changing that wouldn't be trivial / free, because\nwe do a lot of bitmasking/shifting with constants derived from\n\n#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))\n\nwhich obviously wouldn't be constant anymore if you could reserve space on the\npage.\n\n\n> The API of smgr is also tailored to page-sized quanta of data\n> with mostly relation-level information. 
I don't see why there would be\n> a veil covering the layout of Page for smgr when all other information\n> already points to the use of PageHeader and Page layouts. In my view,\n> it would even make sense to allow the smgr to get exclusive access to\n> some part of the page in the current Page layout.\n> \n> Yes, I agree that there will be an impact on usable page size if you\n> want authenticated encryption, and that AMs will indeed need to\n> account for storage space now being used by the smgr - inconvenient,\n> but it serves a purpose. That would happen regardless of whether smgr\n> or some higher system decides where to store the data for encryption -\n> as long as it is on the page, the AM effectively can't use those\n> bytes.\n> But I'd say that's best solved by making the Page documentation and\n> PageInit API explicit about the potential use of that space by the\n> chosen storage method (encrypted, plain, ...) instead of requiring the\n> various AMs to manually consider encryption when using Postgres' APIs\n> for writing data to disk without hitting shared buffers; page space\n> management is already a task of AMs, but handling the actual\n> encryption is not.\n\nI don't particularly disagree with any detail here - but to me reserving space\nfor nonces etc at PageInit() time pretty much is the opposite of handling\nencryption inside smgr.\n\n\n> Should the AM really care whether the data on disk is encrypted or\n> not? I don't think so. When the disk contains encrypted bytes, but\n> smgrread() and smgrwrite() both produce and accept plaintext data,\n> who's going to complain? Requiring AMs to be mindful about encryption\n> on all common paths only adds pitfalls where encryption would be\n> forgotten by the developer of AMs in one path or another.\n\nI agree with that - I think the way the patch currently is designed is not\nright.\n\n\nThere's other stuff you can't trivially do at the smgr level. E.g. 
if\nchecksums or encryption is enabled, you need to copy the buffer to compute\nchecksums / do IO if in shared buffers, because somebody could set a hint bit\neven with just a shared content lock. But you don't need that when coming from\nprivate buffers during index builds.\n\n\n\n> I think that getting PageInit to allocate the smgr-specific area would\n> take some effort, too (which would potentially require adding some\n> relational context to PageInit, so that it knows which page of which\n> relation it is going to initialize), but IMHO that would be more\n> natural than requiring all index and table AMs to be aware the actual\n> encryption of its pages and require manual handling of that encryption\n> when the page needs to be written to disk, when it otherwise already\n> conforms to the various buffer management and file extension APIs\n> currently in use in PostgreSQL. I would expect \"transparent\" data\n> encryption to be handled at the file write layer (i.e. smgr), not\n> inside the AMs.\n\nAs mentioned above - I agree that the relevant code shouldn't be in index\nAMs. But I somewhat doubt that smgr is the right level either. For one, the\nplace computing checksums needs awareness of locking / sharing semantics as\nwell as knowledge about WAL logging. That's IMO above smgr. For another, if we\never got another smgr implementation - should it have to reimplement\nencryption?\n\nISTM that there's a layer missing. Places bypassing bufmgr.c currently need\ntheir own handling of checksums (and in the future encryption), because the\nrelevant bufmgr.c code can't be reached without pages in the buffer\npool. Which is why we have PageIsVerifiedExtended() and\nPageSetChecksumInplace() calls in gist, hash, heapam, nbtree ... IMO when\nchecksums were added, we should have added the proper abstraction layer\ninstead of littering the code with redundant copies.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Nov 2023 16:47:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
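The visibilitymap arithmetic Andres refers to is all compile-time constant today. A sketch of the address math (constants assumed for an 8 kB, 64-bit build, mirroring the MAPSIZE-derived defines) shows why reserving page space would make it runtime-variable:

```python
BLCKSZ = 8192
SIZEOF_PAGE_HEADER = 24       # MAXALIGN(SizeOfPageHeaderData), assumed
BITS_PER_HEAPBLOCK = 2        # all-visible + all-frozen bits per heap block
HEAPBLOCKS_PER_BYTE = 8 // BITS_PER_HEAPBLOCK

# Both of these stop being compile-time constants the moment a reserved
# page area shrinks the usable map space:
MAPSIZE = BLCKSZ - SIZEOF_PAGE_HEADER
HEAPBLOCKS_PER_PAGE = MAPSIZE * HEAPBLOCKS_PER_BYTE

def vm_address(heap_blk: int):
    """(vm block, byte within the map, bit shift) for a heap block's bits."""
    map_block, in_page = divmod(heap_blk, HEAPBLOCKS_PER_PAGE)
    map_byte, in_byte = divmod(in_page, HEAPBLOCKS_PER_BYTE)
    return map_block, map_byte, in_byte * BITS_PER_HEAPBLOCK
```

The divisions and modulos here are what the compiler currently strength-reduces to shifts and masks; with a runtime MAPSIZE they become real divides on a hot path, which is the overhead David's reply below this point is addressing.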
{
"msg_contents": "On Tue, Nov 7, 2023 at 6:47 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-11-06 11:26:44 +0100, Matthias van de Meent wrote:\n> > On Sat, 4 Nov 2023 at 03:38, Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-11-02 22:09:40 +0100, Matthias van de Meent wrote:\n> > > > I'm quite surprised at the significant number of changes being made\n> > > > outside the core storage manager files. I thought that changing out\n> > > > mdsmgr with an encrypted smgr (that could wrap mdsmgr if so desired)\n> > > > would be the most obvious change to implement cluster-wide encryption\n> > > > with the least code touched, as relations don't need to know whether\n> > > > the files they're writing are encrypted, right? Is there a reason to\n> > > > not implement this at the smgr level that I overlooked in the\n> > > > documentation of these patches?\n> > >\n> > > You can't really implement encryption transparently inside an smgr\n> without\n> > > significant downsides. You need a way to store an initialization vector\n> > > associated with the page (or you can store that elsewhere, but then\n> you've\n> > > doubled the worst cse amount of random reads/writes). The patch uses\n> the LSN\n> > > as the IV (which I doubt is a good idea). For authenticated encryption\n> further\n> > > additional storage space is required.\n> >\n> > I am unaware of any user of the smgr API that doesn't also use the\n> > buffer cache, and thus implicitly the Page layout with PageHeader\n> > [^1]\n>\n> Everything indeed uses a PageHeader - but there are a number of places\n> that do\n> *not* utilize pd_lower/upper/special. E.g. 
visibilitymap.c just assumes\n> that\n> those fields are zero - and changing that wouldn't be trivial / free,\n> because\n> we do a lot of bitmasking/shifting with constants derived from\n>\n> #define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))\n>\n> which obviously wouldn't be constant anymore if you could reserve space on\n> the\n> page.\n>\n\nWhile not constants, I was able to get this working with variable values\nhere in a way that did not have the overhead of the original patch for the\nvismap specifically using Montgomery Multiplication for division/mod. This\nwas actually the heaviest of the changes from moving to runtime-calculated,\nso we might be able to use this approach in this specific case even if only\nthis change is required for this specific fork.\n\n\n> > The API of smgr is also tailored to page-sized quanta of data\n> > with mostly relation-level information. I don't see why there would be\n> > a veil covering the layout of Page for smgr when all other information\n> > already points to the use of PageHeader and Page layouts. In my view,\n> > it would even make sense to allow the smgr to get exclusive access to\n> > some part of the page in the current Page layout.\n> >\n> > Yes, I agree that there will be an impact on usable page size if you\n> > want authenticated encryption, and that AMs will indeed need to\n> > account for storage space now being used by the smgr - inconvenient,\n> > but it serves a purpose. That would happen regardless of whether smgr\n> > or some higher system decides where to store the data for encryption -\n> > as long as it is on the page, the AM effectively can't use those\n> > bytes.\n> > But I'd say that's best solved by making the Page documentation and\n> > PageInit API explicit about the potential use of that space by the\n> > chosen storage method (encrypted, plain, ...) 
instead of requiring the\n> > various AMs to manually consider encryption when using Postgres' APIs\n> > for writing data to disk without hitting shared buffers; page space\n> > management is already a task of AMs, but handling the actual\n> > encryption is not.\n>\n> I don't particularly disagree with any detail here - but to me reserving\n> space\n> for nonces etc at PageInit() time pretty much is the opposite of handling\n> encryption inside smgr.\n>\n\nOriginally, I was anticipating that we might want different space amounts\nreserved on different classes of pages (apart from encryption), so while\nwe'd be storing the default page reserved size in pg_control we'd not be\nlimited to this in the structure of the page calls. We could presumably\njust move the logic into PageInit() itself if every reserved allocation is\nthe same and individual call sites wouldn't need to know about it. The\ncall sites do have more context as to the requirements of the page or the\n\"type\" of page in play, which if we made it dependent on page type would\nneed to get passed in somehow, which was where the reserved_page_size\nparameter came in to the current patch.\n\n>\n> > Should the AM really care whether the data on disk is encrypted or\n> > not? I don't think so. When the disk contains encrypted bytes, but\n> > smgrread() and smgrwrite() both produce and accept plaintext data,\n> > who's going to complain? Requiring AMs to be mindful about encryption\n> > on all common paths only adds pitfalls where encryption would be\n> > forgotten by the developer of AMs in one path or another.\n>\n> I agree with that - I think the way the patch currently is designed is not\n> right.\n>\n\nThe things that need to care tend to be the same places that need to care\nabout setting checksums, that or being aware of the LSNs in play or needing\nto be set. 
I'd agree that a common interface for \"get this page ready for\nwriting to storage\" and \"get this page converted from storage\" which could\nhandle both checksums, encryption, or additional page features would make\nsense. (I doubt we'd want hooks to support page in/page out, but if we\n/did/ want that, this'd also likely live there.)\n\nThere's other stuff you can't trivially do at the smgr level. E.g. if\n> checksums or encryption is enabled, you need to copy the buffer to compute\n> checksums / do IO if in shared buffers, because somebody could set a hint\n> bit\n> even with just a shared content lock. But you don't need that when coming\n> from\n> private buffers during index builds.\n>\n>\n>\n> > I think that getting PageInit to allocate the smgr-specific area would\n> > take some effort, too (which would potentially require adding some\n> > relational context to PageInit, so that it knows which page of which\n> > relation it is going to initialize), but IMHO that would be more\n> > natural than requiring all index and table AMs to be aware the actual\n> > encryption of its pages and require manual handling of that encryption\n> > when the page needs to be written to disk, when it otherwise already\n> > conforms to the various buffer management and file extension APIs\n> > currently in use in PostgreSQL. I would expect \"transparent\" data\n> > encryption to be handled at the file write layer (i.e. smgr), not\n> > inside the AMs.\n>\n> As mentioned above - I agree that the relevant code shouldn't be in index\n> AMs. But I somewhat doubt that smgr is the right level either. For one, the\n> place computing checksums needs awareness of locking / sharing semantics as\n> well as knowledge about WAL logging. That's IMO above smgr. For another,\n> if we\n> ever got another smgr implementation - should it have to reimplement\n> encryption?\n>\n> ISTM that there's a layer missing. 
Places bypassing bufmgr.c currently need\n> their own handling of checksums (and in the future encryption), because the\n> relevant bufmgr.c code can't be reached without pages in the buffer\n> pool. Which is why we have PageIsVerifiedExtended() and\n> PageSetChecksumInplace() calls in gist, hash, heapam, nbtree ... IMO when\n> checksums were added, we should have added the proper abstraction layer\n> instead of littering the code with redundant copies.\n\n\nAgreed that a fixup patch to add /something/ here would be good. A concern\nhere is what context would need to be passed in; certainly with only\nchecksums just a Page is sufficient, but if we have AAD we'd need to be\nable to pass that in or otherwise be able to identify it in the page. As a\npoint of references, the existing GCM patch authenticates all unencrypted\nheader fields (i.e., PageHeaderData up to the pd_special field), plus the\nRelFileNumber and BlockNumber, of which we'd need to pass in RelFileNumber\nand BlockNumber (and presumably would want the ForkNum as well if expanding\nthe set of authenticated data). To some extent, we can punt on some of\nthis, as the existing call sites have been modified in this patch to pass\nthat info in already, so it's really about how we marshal that data.\n\nThanks,\n\nDavid\n\nOn Tue, Nov 7, 2023 at 6:47 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2023-11-06 11:26:44 +0100, Matthias van de Meent wrote:\n> On Sat, 4 Nov 2023 at 03:38, Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-11-02 22:09:40 +0100, Matthias van de Meent wrote:\n> > > I'm quite surprised at the significant number of changes being made\n> > > outside the core storage manager files. 
I thought that changing out\n> > > mdsmgr with an encrypted smgr (that could wrap mdsmgr if so desired)\n> > > would be the most obvious change to implement cluster-wide encryption\n> > > with the least code touched, as relations don't need to know whether\n> > > the files they're writing are encrypted, right? Is there a reason to\n> > > not implement this at the smgr level that I overlooked in the\n> > > documentation of these patches?\n> >\n> > You can't really implement encryption transparently inside an smgr without\n> > significant downsides. You need a way to store an initialization vector\n> > associated with the page (or you can store that elsewhere, but then you've\n> > doubled the worst cse amount of random reads/writes). The patch uses the LSN\n> > as the IV (which I doubt is a good idea). For authenticated encryption further\n> > additional storage space is required.\n> \n> I am unaware of any user of the smgr API that doesn't also use the\n> buffer cache, and thus implicitly the Page layout with PageHeader\n> [^1]\n\nEverything indeed uses a PageHeader - but there are a number of places that do\n*not* utilize pd_lower/upper/special. E.g. visibilitymap.c just assumes that\nthose fields are zero - and changing that wouldn't be trivial / free, because\nwe do a lot of bitmasking/shifting with constants derived from\n\n#define MAPSIZE (BLCKSZ - MAXALIGN(SizeOfPageHeaderData))\n\nwhich obviously wouldn't be constant anymore if you could reserve space on the\npage.While not constants, I was able to get this working with variable values here in a way that did not have the overhead of the original patch for the vismap specifically using Montgomery Multiplication for division/mod. This was actually the heaviest of the changes from moving to runtime-calculated, so we might be able to use this approach in this specific case even if only this change is required for this specific fork. 
\n> The API of smgr is also tailored to page-sized quanta of data\n> with mostly relation-level information. I don't see why there would be\n> a veil covering the layout of Page for smgr when all other information\n> already points to the use of PageHeader and Page layouts. In my view,\n> it would even make sense to allow the smgr to get exclusive access to\n> some part of the page in the current Page layout.\n> \n> Yes, I agree that there will be an impact on usable page size if you\n> want authenticated encryption, and that AMs will indeed need to\n> account for storage space now being used by the smgr - inconvenient,\n> but it serves a purpose. That would happen regardless of whether smgr\n> or some higher system decides where to store the data for encryption -\n> as long as it is on the page, the AM effectively can't use those\n> bytes.\n> But I'd say that's best solved by making the Page documentation and\n> PageInit API explicit about the potential use of that space by the\n> chosen storage method (encrypted, plain, ...) instead of requiring the\n> various AMs to manually consider encryption when using Postgres' APIs\n> for writing data to disk without hitting shared buffers; page space\n> management is already a task of AMs, but handling the actual\n> encryption is not.\n\nI don't particularly disagree with any detail here - but to me reserving space\nfor nonces etc at PageInit() time pretty much is the opposite of handling\nencryption inside smgr.Originally, I was anticipating that we might want different space amounts reserved on different classes of pages (apart from encryption), so while we'd be storing the default page reserved size in pg_control we'd not be limited to this in the structure of the page calls. We could presumably just move the logic into PageInit() itself if every reserved allocation is the same and individual call sites wouldn't need to know about it. 
The call sites do have more context as to the requirements of the page or the \"type\" of page in play, which if we made it dependent on page type would need to get passed in somehow, which was where the reserved_page_size parameter came in to the current patch.\n\n> Should the AM really care whether the data on disk is encrypted or\n> not? I don't think so. When the disk contains encrypted bytes, but\n> smgrread() and smgrwrite() both produce and accept plaintext data,\n> who's going to complain? Requiring AMs to be mindful about encryption\n> on all common paths only adds pitfalls where encryption would be\n> forgotten by the developer of AMs in one path or another.\n\nI agree with that - I think the way the patch currently is designed is not\nright.The things that need to care tend to be the same places that need to care about setting checksums, that or being aware of the LSNs in play or needing to be set. I'd agree that a common interface for \"get this page ready for writing to storage\" and \"get this page converted from storage\" which could handle both checksums, encryption, or additional page features would make sense. (I doubt we'd want hooks to support page in/page out, but if we /did/ want that, this'd also likely live there.)There's other stuff you can't trivially do at the smgr level. E.g. if\nchecksums or encryption is enabled, you need to copy the buffer to compute\nchecksums / do IO if in shared buffers, because somebody could set a hint bit\neven with just a shared content lock. 
But you don't need that when coming from\nprivate buffers during index builds.\n\n\n\n> I think that getting PageInit to allocate the smgr-specific area would\n> take some effort, too (which would potentially require adding some\n> relational context to PageInit, so that it knows which page of which\n> relation it is going to initialize), but IMHO that would be more\n> natural than requiring all index and table AMs to be aware the actual\n> encryption of its pages and require manual handling of that encryption\n> when the page needs to be written to disk, when it otherwise already\n> conforms to the various buffer management and file extension APIs\n> currently in use in PostgreSQL. I would expect \"transparent\" data\n> encryption to be handled at the file write layer (i.e. smgr), not\n> inside the AMs.\n\nAs mentioned above - I agree that the relevant code shouldn't be in index\nAMs. But I somewhat doubt that smgr is the right level either. For one, the\nplace computing checksums needs awareness of locking / sharing semantics as\nwell as knowledge about WAL logging. That's IMO above smgr. For another, if we\never got another smgr implementation - should it have to reimplement\nencryption?\n\nISTM that there's a layer missing. Places bypassing bufmgr.c currently need\ntheir own handling of checksums (and in the future encryption), because the\nrelevant bufmgr.c code can't be reached without pages in the buffer\npool. Which is why we have PageIsVerifiedExtended() and\nPageSetChecksumInplace() calls in gist, hash, heapam, nbtree ... IMO when\nchecksums were added, we should have added the proper abstraction layer\ninstead of littering the code with redundant copies.Agreed that a fixup patch to add /something/ here would be good. A concern here is what context would need to be passed in; certainly with only checksums just a Page is sufficient, but if we have AAD we'd need to be able to pass that in or otherwise be able to identify it in the page. 
As a point of reference, the existing GCM patch authenticates all unencrypted header fields (i.e., PageHeaderData up to the pd_special field), plus the RelFileNumber and BlockNumber, of which we'd need to pass in RelFileNumber and BlockNumber (and presumably would want the ForkNum as well if expanding the set of authenticated data). To some extent, we can punt on some of this, as the existing call sites have been modified in this patch to pass that info in already, so it's really about how we marshal that data.\n\nThanks,\nDavid",
"msg_date": "Wed, 8 Nov 2023 16:47:06 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 5:49 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-11-06 09:56:37 -0500, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > I still am quite quite unconvinced that using the LSN as a nonce is a\n> good\n> > > design decision.\n> >\n> > This is a really important part of the overall path to moving this\n> > forward, so I wanted to jump to it and have a specific discussion\n> > around this. I agree that there are downsides to using the LSN, some of\n> > which we could possibly address (eg: include the timeline ID in the IV),\n> > but others that would be harder to deal with.\n>\n> > The question then is- what's the alternative?\n> >\n> > One approach would be to change the page format to include space for an\n> > explicit nonce. I don't see the community accepting such a new page\n> > format as the only format we support though as that would mean no\n> > pg_upgrade support along with wasted space if TDE isn't being used.\n>\n> Right.\n>\n\nHmm, if we /were/ to introduce some sort of page format change, Couldn't\nthat be a use case for modifying the pd_version field? Could v4 pages be\nread in and written out as v5 pages with different interpretations?\n\n\n> > Ideally, we'd instead be able to support multiple page formats where\n> > users could decide when they create their cluster what features they\n> > want- and luckily we even have such an effort underway with patches\n> > posted for review [1].\n>\n> I think there are some details wrong with that patch - IMO the existing\n> macros\n> should just continue to work as-is and instead the places that want the\n> more\n> narrow definition should be moved to the new macros and it changes places\n> that\n> should continue to use compile time constants - but it doesn't seem like a\n> fundamentally bad idea to me. 
I certainly like it much better than making\n> the\n> page size runtime configurable.\n>\n\nThere had been some discussion about this WRT renaming macros and the like\n(constants, etc)—I think a new pass eliminating the variable blocksize\npieces and seeing if we can minimize churn here is worthwhile, will take a\nlook and see what the minimally-viable set of changes is here.\n\n\n> (I'll try to reply with the above points to [1])\n>\n>\n> > Certainly, with the base page-special-feature patch, we could have an\n> option\n> > for users to choose that they want a better nonce than the LSN, or we\n> could\n> > bundle that assumption in with, say, the authenticated-encryption feature\n> > (if you want authenticated encryption, then you want more from the\n> > encryption system than the basics, and therefore we presume you also\n> want a\n> > better nonce than the LSN).\n>\n> I don't think we should support using the LSN as a nonce if we have an\n> alternative. The cost and complexity overhead is just not worth it. Yes,\n> it'll be harder for users to migrate to encryption, but adding complexity\n> elsewhere in the system to get an inferior result isn't worth it.\n>\n\n From my read, XTS (which I'd see as inferior to authenticated encryption,\nbut better than some other options) could use LSN as an IV without leakage\nconcerns, perhaps mixing in the BlockNumber as well. 
If we are going to\nallow multiple encryption types, I think we may need to consider that needs\nfor IVs may be different, so this may need to be something that is\nselectable per encryption type.\n\nI am unclear how much of a requirement this is, but seems like having a\ndesign supporting this to be pluggable—even if a static lookup table\ninternally for encryption type, block length, IV source, etc—seems the most\nfuture proof if we had to retire an encryption method or prevent creation\nof specific methods, say.\n\n\n> > Another approach would be a separate fork, but that then has a number of\n> > downsides too- every write has to touch that too, and a single page of\n> > nonces would cover a pretty large number of pages also.\n>\n> Yea, the costs of doing so is nontrivial. If you were trying to implement\n> encryption on the smgr level - which I doubt you should but am not certain\n> about - my suggestion would be to interleave pages with metadata like\n> nonces\n> and AEAD with the data pages. I.e. one metadata page would be followed by\n> (BLCKSZ - SizeOfPageHeaderData) / (sizeof(nonce) + sizeof(AEAD))\n> pages containing actual relation data. 
That way you still get decent\n> locality\n> during scans and writes.\n>\n\nHmm, this is actually an interesting idea, I will think about this a bit.\n\n\n> Relation forks were a mistake, we shouldn't use them in more places.\n>\n>\n> I think it'd be much better if we also encrypted forks, rather than just\n> the\n> main fork...\n>\n\nI believe the existing code should just work by modifying\nthe PageNeedsToBeEncrypted macro; I will test that and see if anything\nblows up.\n\nDavid",
"msg_date": "Wed, 8 Nov 2023 17:01:38 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "Hi,\r\n\r\nI was re-reading the patches here and there was one thing I didn't understand.\r\n\r\nThere are provisions for a separation of data encryption keys for primary and replica I see, and these share a single WAL key.\r\n\r\nBut if I am setting up a replica from the primary, and the primary is already encrypted, then do these forceably share the same data encrypting keys? Is there a need to have (possibly in a follow-up patch) an ability to decrypt and re-encrypt in pg_basebackup (which would need access to both keys) or is this handled already and I just missed it?\r\n\r\nBest Wishes,\r\nChris Travers",
"msg_date": "Sun, 17 Dec 2023 06:30:50 +0000",
"msg_from": "Chris Travers <chris.travers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "On Sun, Dec 17, 2023 at 06:30:50AM +0000, Chris Travers wrote:\n> Hi,\n> \n> I was re-reading the patches here and there was one thing I didn't understand.\n> \n> There are provisions for a separation of data encryption keys for primary and replica I see, and these share a single WAL key.\n> \n> But if I am setting up a replica from the primary, and the primary is already encrypted, then do these forceably share the same data encrypting keys? Is there a need to have (possibly in a follow-up patch) an ability to decrypt and re-encrypt in pg_basebackup (which would need access to both keys) or is this handled already and I just missed it?\n\nYes, decrypt and re-encrypt in pg_basebackup would be necessary, or in\nthe actual protocol stream.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 26 Dec 2023 13:55:20 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/3985/\n[2] https://cirrus-ci.com/task/5498215743619072\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:17:21 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
},
{
"msg_contents": "On Mon, 22 Jan 2024 at 11:47, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 20:47:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Moving forward with TDE [PATCH v3]"
}
] |
[
{
"msg_contents": "Hi,\nWhen I was looking at src/backend/optimizer/util/restrictinfo.c, I found a\ntypo in one of the comments.\n\nI also took the chance to simplify the code a little bit.\n\nPlease take a look at the patch.\n\nThanks",
"msg_date": "Mon, 24 Oct 2022 10:19:23 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 12:19 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> When I was looking at src/backend/optimizer/util/restrictinfo.c, I found\na typo in one of the comments.\n\nUsing \"t\" as an abbreviation for \"true\" was probably intentional, so not a\ntypo. There is no doubt what the behavior is.\n\n> I also took the chance to simplify the code a little bit.\n\nIt's perfectly clear and simple now, even if it doesn't win at \"code golf\".\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 Oct 2022 09:05:10 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 10:05 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n>\n> On Tue, Oct 25, 2022 at 12:19 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > When I was looking at src/backend/optimizer/util/restrictinfo.c, I found\n> a typo in one of the comments.\n>\n> Using \"t\" as an abbreviation for \"true\" was probably intentional, so not a\n> typo. There is no doubt what the behavior is.\n>\n> > I also took the chance to simplify the code a little bit.\n>\n> It's perfectly clear and simple now, even if it doesn't win at \"code golf\".\n>\n\nAgree with your point. Do you think we can further make the one-line\nfunction a macro or an inline function in the .h file? I think this\nfunction is called quite frequently during planning, so maybe doing that\nwould bring a little bit of efficiency.\n\nThanks\nRichard",
"msg_date": "Tue, 25 Oct 2022 10:48:00 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "\nOn Tue, 25 Oct 2022 at 10:48, Richard Guo <guofenglinux@gmail.com> wrote:\n> On Tue, Oct 25, 2022 at 10:05 AM John Naylor <john.naylor@enterprisedb.com>\n> wrote:\n>\n>>\n>> On Tue, Oct 25, 2022 at 12:19 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>> >\n>> > Hi,\n>> > When I was looking at src/backend/optimizer/util/restrictinfo.c, I found\n>> a typo in one of the comments.\n>>\n>> Using \"t\" as an abbreviation for \"true\" was probably intentional, so not a\n>> typo. There is no doubt what the behavior is.\n>>\n>> > I also took the chance to simplify the code a little bit.\n>>\n>> It's perfectly clear and simple now, even if it doesn't win at \"code golf\".\n>>\n>\n> Agree with your point. Do you think we can further make the one-line\n> function a macro or an inline function in the .h file? I think this\n> function is called quite frequently during planning, so maybe doing that\n> would bring a little bit of efficiency.\n>\n\n+1, same goes for restriction_is_securely_promotable.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n\n",
"msg_date": "Tue, 25 Oct 2022 10:58:09 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 7:58 PM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Tue, 25 Oct 2022 at 10:48, Richard Guo <guofenglinux@gmail.com> wrote:\n> > On Tue, Oct 25, 2022 at 10:05 AM John Naylor <\n> john.naylor@enterprisedb.com>\n> > wrote:\n> >\n> >>\n> >> On Tue, Oct 25, 2022 at 12:19 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >> >\n> >> > Hi,\n> >> > When I was looking at src/backend/optimizer/util/restrictinfo.c, I\n> found\n> >> a typo in one of the comments.\n> >>\n> >> Using \"t\" as an abbreviation for \"true\" was probably intentional, so\n> not a\n> >> typo. There is no doubt what the behavior is.\n> >>\n> >> > I also took the chance to simplify the code a little bit.\n> >>\n> >> It's perfectly clear and simple now, even if it doesn't win at \"code\n> golf\".\n> >>\n> >\n> > Agree with your point. Do you think we can further make the one-line\n> > function a macro or an inline function in the .h file? I think this\n> > function is called quite frequently during planning, so maybe doing that\n> > would bring a little bit of efficiency.\n> >\n>\n> +1, same goes for restriction_is_securely_promotable.\n>\n> Hi,\nThanks for the comments.\n\nPlease take a look at patch v2.",
"msg_date": "Mon, 24 Oct 2022 20:07:45 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, 25 Oct 2022 at 11:07, Zhihong Yu <zyu@yugabyte.com> wrote:\n> Please take a look at patch v2.\n\nMaybe we should define those functions in headers. See patch v3.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 25 Oct 2022 11:46:10 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 11:46 AM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Tue, 25 Oct 2022 at 11:07, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > Please take a look at patch v2.\n>\n> Maybe we should define those functions in headers. See patch v3.\n\n\nYes, putting them in .h file is better to me. For the v3 patch, we can\ndo the same one-line trick for restriction_is_securely_promotable.\n\nThanks\nRichard",
"msg_date": "Tue, 25 Oct 2022 12:01:50 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, 25 Oct 2022 at 12:01, Richard Guo <guofenglinux@gmail.com> wrote:\n> On Tue, Oct 25, 2022 at 11:46 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>>\n>> On Tue, 25 Oct 2022 at 11:07, Zhihong Yu <zyu@yugabyte.com> wrote:\n>> > Please take a look at patch v2.\n>>\n>> Maybe we should define those functions in headers. See patch v3.\n>\n>\n> Yes, putting them in .h file is better to me. For the v3 patch, we can\n> do the same one-line trick for restriction_is_securely_promotable.\n>\n\nFixed. Please consider the v4 for further review.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 25 Oct 2022 13:40:23 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 9:48 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Tue, Oct 25, 2022 at 10:05 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>>\n>> It's perfectly clear and simple now, even if it doesn't win at \"code\ngolf\".\n>\n>\n> Agree with your point. Do you think we can further make the one-line\n> function a macro or an inline function in the .h file? I think this\n> function is called quite frequently during planning, so maybe doing that\n> would bring a little bit of efficiency.\n\nMy point was misunderstood, which is: I don't think we need to do anything\nat all here if the goal was purely about aesthetics.\n\nIf the goal has now changed to efficiency, I have no opinion about that yet\nsince no evidence has been presented.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 Oct 2022 13:25:32 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On 2022-Oct-25, Richard Guo wrote:\n\n> Agree with your point. Do you think we can further make the one-line\n> function a macro or an inline function in the .h file?\n\nWe can, but should we?\n\n> I think this function is called quite frequently during planning, so\n> maybe doing that would bring a little bit of efficiency.\n\nYou'd need to measure it and show some gains.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)\n\n\n",
"msg_date": "Tue, 25 Oct 2022 09:37:12 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 2:25 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n>\n> On Tue, Oct 25, 2022 at 9:48 AM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >\n> >\n> > On Tue, Oct 25, 2022 at 10:05 AM John Naylor <\n> john.naylor@enterprisedb.com> wrote:\n> >>\n> >> It's perfectly clear and simple now, even if it doesn't win at \"code\n> golf\".\n> >\n> >\n> > Agree with your point. Do you think we can further make the one-line\n> > function a macro or an inline function in the .h file? I think this\n> > function is called quite frequently during planning, so maybe doing that\n> > would bring a little bit of efficiency.\n>\n> My point was misunderstood, which is: I don't think we need to do anything\n> at all here if the goal was purely about aesthetics.\n>\n> If the goal has now changed to efficiency, I have no opinion about that\n> yet since no evidence has been presented.\n>\n\nNow I think I've got your point. Sorry for the misread.\n\nYour concern makes sense. When talking about efficiency we'd better\nattach some concrete proof, such as benchmark tests.\n\nThanks\nRichard",
"msg_date": "Tue, 25 Oct 2022 15:52:30 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 3:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Oct-25, Richard Guo wrote:\n>\n> > Agree with your point. Do you think we can further make the one-line\n> > function a macro or an inline function in the .h file?\n>\n> We can, but should we?\n>\n> > I think this function is called quite frequently during planning, so\n> > maybe doing that would bring a little bit of efficiency.\n>\n> You'd need to measure it and show some gains.\n\n\nYeah, that is what has to be done to make it happen.\n\nThanks\nRichard",
"msg_date": "Tue, 25 Oct 2022 15:56:33 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fixing typo in comment for restriction_is_or_clause"
}
] |
[
{
"msg_contents": "Hi -hackers,\n\nAn additional piece that I am working on for improving infra for TDE\nfeatures is allowing the storage of additional per-page data. Rather\nthan hard-code the idea of a specific struct, this is utilizing a new,\nmore dynamic structure to associate page offsets with a particular\nfeature that may-or-may-not be present for a given cluster. I am\ncalling this generic structure a PageFeature/PageFeatureSet (better\nnames welcome), which is defined for a cluster at initdb/bootstrap\ntime, and reserves a given amount of trailing space on the Page which\nis then parceled out to the consumers of said space.\n\nWhile the immediate need that this feature fills is storage of\nencryption tags for XTS-based encryption on the pages themselves, this\ncan also be used for any optional features; as an example I have\nimplemented expanded checksum support (both 32- and 64-bit), as well\nas a self-description \"wasted space\" feature, which just allocates\ntrailing space from the page (obviously intended as illustration\nonly).\n\nThere are 6 commits in this series:\n\n0001 - adds `reserved_page_space` global, making various size\ncalculations and limits dynamic, adjusting access methods to offset\nspecial space, and ensuring that we can safely reserve allocated space\nfrom the end of pages.\n\n0002 - test suite stability fixes - the change in number of tuples per\npage means that we had some assumptions about the order from tests\nthat now break\n\n0003 - the \"PageFeatures\" commit, the meat of this feature (see\nfollowing description)\n\n0004 - page_checksum32 feature - store the full 32-bit checksum across\nthe existing pd_checksum field as well as 2 bytes from\nreserved_page_space. This is more of a demo of what could be done\nhere than a practical feature.\n\n0005 - wasted space PageFeature - just use up space. 
An additional\nfeature we can turn on/off to see how multiple features interact.\nOnly for illustration.\n\n0006 - 64-bit checksums - fully allocated from reserved_page_space.\nUsing an MIT-licensed 64-bit checksum, but if we determined we'd want\nto do this we'd probably roll our own.\n\nFrom the commit message for PageFeatures:\n\nPage features are a standardized way of assigning and using dynamic\nspace usage from the tail end of\na disk page. These features are set at cluster init time (so\nconfigured via `initdb` and\ninitialized via the bootstrap process) and affect all disk pages.\n\nA PageFeatureSet is effectively a bitflag of all configured features,\neach of which has a fixed\nsize. If not using any PageFeatures, the storage overhead of this is 0.\n\nRather than using a variable location struct, an implementation of a\nPageFeature is responsible for\nan offset and a length in the page. The current API returns only a\npointer to the page location for\nthe implementation to manage, and no further checks are done to ensure\nthat only the expected memory\nis accessed.\n\nAccess to the underlying memory is synonymous with determining whether\na given cluster is using an\nunderlying PageFeature, so code paths can do something like:\n\n char *loc;\n\n if ((loc = ClusterGetPageFeatureOffset(page, PF_MY_FEATURE_ID)))\n {\n // ipso facto this feature is enabled in this cluster *and* we\nknow the memory address\n ...\n }\n\nSince this is direct memory access to the underlying Page, ensure the\nbuffer is pinned. Explicitly\nlocking (assuming you stay in your lane) should only need to guard\nagainst access from other\nbackends of this type if using shared buffers, so will be use-case dependent.\n\nThis does have a runtime overhead due to moving some offset\ncalculations from compile time to\nruntime. It is thought that the utility of this feature will outweigh\nthe costs here.\n\nCandidates for page features include 32-bit or 64-bit checksums,\nencryption tags, or additional\nper-page metadata.\n\nWhile we are not currently getting rid of the pd_checksum field, this\nmechanism could be used to\nfree up those 16 bits for some other purpose. One such purpose might be\nto mirror the cluster-wide\nPageFeatureSet, currently also a uint16, which would mean the entirety\nof this scheme could be\nreflected in a given page, opening up per-relation or even per-page\nsetting/metadata here. (We'd\npresumably need to snag a pd_flags bit to interpret pd_checksum that\nway, but it would be an\ninteresting use.)\n\nDiscussion is welcome and encouraged!\n\nThanks,\n\nDavid",
"msg_date": "Mon, 24 Oct 2022 12:55:53 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "[PATCHES] Post-special page storage TDE support"
},
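The `ClusterGetPageFeatureOffset` pattern quoted above implies a simple layout rule: each enabled feature has a fixed size, and enabled features are parceled out in a fixed order from the tail of the page. A minimal model of that rule is sketched below; the feature names, sizes, and id-ordered end-of-page layout are illustrative assumptions, not the patch's actual definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BLCKSZ 8192

/* Hypothetical feature ids and fixed sizes (illustrative only). */
typedef enum
{
	PF_EXT_CHECKSUM = 0,		/* 8-byte extended checksum */
	PF_ENCRYPTION_TAG = 1,		/* 16-byte AEAD tag */
	PF_MAX
} PageFeature;

static const size_t pf_sizes[PF_MAX] = {8, 16};

/* Total trailing space reserved for a given feature-set bitmask. */
size_t
reserved_size_for(uint16_t feature_set)
{
	size_t		total = 0;

	for (int f = 0; f < PF_MAX; f++)
		if (feature_set & (1 << f))
			total += pf_sizes[f];
	return total;
}

/*
 * Page offset of an enabled feature's slot, or -1 if the feature is not
 * in the set.  Enabled features are laid out in id order, working back
 * from the end of the page.
 */
long
feature_offset(uint16_t feature_set, PageFeature f)
{
	size_t		from_end = 0;

	if (!(feature_set & (1 << f)))
		return -1;
	for (int i = 0; i <= (int) f; i++)
		if (feature_set & (1 << i))
			from_end += pf_sizes[i];
	return (long) (BLCKSZ - from_end);
}
```

With both example features enabled this reserves 24 trailing bytes, and because the offset lookup returns NULL-equivalent (-1) for a disabled feature, a caller can branch on it exactly as in the quoted snippet: presence of an offset is presence of the feature.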
{
"msg_contents": "Hi,\n\nOn Mon, Oct 24, 2022 at 12:55:53PM -0500, David Christensen wrote:\n>\n> Explicitly\n> locking (assuming you stay in your lane) should only need to guard\n> against access from other\n> backends of this type if using shared buffers, so will be use-case dependent.\n\nI'm not sure what you mean here?\n\n> This does have a runtime overhead due to moving some offset\n> calculations from compile time to\n> runtime. It is thought that the utility of this feature will outweigh\n> the costs here.\n\nHave you done some benchmarking to give an idea of how much overhead we're\ntalking about?\n\n> Candidates for page features include 32-bit or 64-bit checksums,\n> encryption tags, or additional\n> per-page metadata.\n>\n> While we are not currently getting rid of the pd_checksum field, this\n> mechanism could be used to\n> free up that 16 bits for some other purpose.\n\nIIUC there's a hard requirement of initdb-time initialization, as there's\notherwise no guarantee that you will find enough free space in each page at\nruntime. It seems like a very hard requirement for a full replacement of the\ncurrent checksum approach (even if I agree that the current implementation\nlimitations are far from ideal), especially since there's no technical reason\nthat would prevent us from dynamically enabling data-checksums without doing\nall the work when the cluster is down.\n\n\n",
"msg_date": "Tue, 25 Oct 2022 09:55:59 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "> > Explicitly\n> > locking (assuming you stay in your lane) should only need to guard\n> > against access from other\n> > backends of this type if using shared buffers, so will be use-case dependent.\n>\n> I'm not sure what you mean here?\n\nI'm mainly pointing out that the specific code that manages this\nfeature is the only one who has to worry about modifying said page\nregion.\n\n> > This does have a runtime overhead due to moving some offset\n> > calculations from compile time to\n> > runtime. It is thought that the utility of this feature will outweigh\n> > the costs here.\n>\n> Have you done some benchmarking to give an idea of how much overhead we're\n> talking about?\n\nNot yet, but I am going to work on this. I suspect the current code\ncould be improved, but will try to get some sort of measurement of the\nadditional overhead.\n\n> > Candidates for page features include 32-bit or 64-bit checksums,\n> > encryption tags, or additional\n> > per-page metadata.\n> >\n> > While we are not currently getting rid of the pd_checksum field, this\n> > mechanism could be used to\n> > free up that 16 bits for some other purpose.\n>\n> IIUC there's a hard requirement of initdb-time initialization, as there's\n> otherwise no guarantee that you will find enough free space in each page at\n> runtime. It seems like a very hard requirement for a full replacement of the\n> current checksum approach (even if I agree that the current implementation\n> limitations are far from ideal), especially since there's no technical reason\n> that would prevent us from dynamically enabling data-checksums without doing\n> all the work when the cluster is down.\n\nAs implemented, that is correct; we are currently assuming this\nspecific feature mechanism is set at initdb time only. Checksums are\nnot the primary motivation here, but were something that I could use\nfor an immediate illustration of the feature. 
That said, presumably\nyou could define a way to set the features per-relation (say with a\ntemplate field in pg_class) which would propagate to a relation on\nrewrite, so there could be ways to handle things incrementally, were\nthis an overall goal.\n\nThanks for looking,\n\nDavid\n\n\n",
"msg_date": "Tue, 25 Oct 2022 09:55:21 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Hi\n\nOn Mon, 24 Oct 2022, 19:56 David Christensen, <\ndavid.christensen@crunchydata.com> wrote:\n>\n> Discussion is welcome and encouraged!\n\nDid you read the related thread with related discussion from last June,\n\"Re: better page-level checksums\" [0]? In that I argued that space at the\nend of a page is already allocated for the AM, and that reserving variable\nspace at the end of the page for non-AM usage is wasting the AM's\nperformance potential.\n\nApart from that: Is this variable-sized 'metadata' associated with smgr\ninfrastructure only, or is it also available for AM features? If not; then\nthis is a strong -1. The amount of tasks smgr needs to do on a page is\ngenerally much less than the amount of tasks an AM needs to do; so in my\nview the AM has priority in prime page real estate, not smgr or related\ninfrastructure.\n\nre: PageFeatures\nI'm not sure I understand the goal, nor the reasoning. Shouldn't this be\npart of the storage manager (smgr) implementation / can't this be part of\nthe smgr of the relation?\n\nre: use of pd_checksum\nI mentioned this in the above-mentioned thread too, in [1], that we could\nuse pd_checksum as an extra area marker for this storage-specific data,\nwhich would be located between pd_upper and pd_special.\n\nRe: patch contents\n\n0001:\n>+ specialSize = MAXALIGN(specialSize) + reserved_page_size;\n\nThis needs to be aligned, so MAXALIGN(specialSize + reserved_page_size), or\nan assertion that reserved_page_size is MAXALIGNED, would be better.\n\n> PageValidateSpecialPointer(Page page)\n> {\n> Assert(page);\n> - Assert(((PageHeader) page)->pd_special <= BLCKSZ);\n> + Assert((((PageHeader) page)->pd_special - reserved_page_size) <=\nBLCKSZ);\n\nThis check is incorrect. With your code it would allow pd_special past the\nend of the block. 
If you want to put the reserved_space_size effectively\ninside the special area, this check should instead be:\n\n+ Assert(((PageHeader) page)->pd_special <= (BLCKSZ -\nreserved_page_size));\n\nOr, equally valid\n\n+ Assert((((PageHeader) page)->pd_special + reserved_page_size) <=\nBLCKSZ);\n\n> + * +-------------+-----+------------+-----------------+\n> + * | ... tuple2 tuple1 | \"special space\" | \"reserved\" |\n> + * +-------------------+------------+-----------------+\n\nCould you fix the table display if / when you revise the patchset? It seems\nto me that the corners don't line up with the column borders.\n\n0002:\n> Make the output of \"select_views\" test stable\n> Changing the reserved_page_size has resulted in non-stable results for\nthis test.\n\nThis makes sense, what kind of instability are we talking about? Are there\ndifferent results for runs with the same binary, or is this across\ncompilations?\n\n0003 and up were not yet reviewed in depth.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoaCeQ2b-BVgVfF8go8zFoceDjJq9w4AFQX7u6Acfdn2uA%40mail.gmail.com#90badc63e568a89a76f51fc95f07ffaf\n[1]\nhttps://postgr.es/m/CAEze2Wi5wYinU7nYxyKe_C0DRc6uWYa8ivn5%3Dzg63nKtHBnn8A%40mail.gmail.com",
"msg_date": "Thu, 27 Oct 2022 15:17:00 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
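The inverted bound called out in this review is easy to demonstrate with a self-contained check; `BLCKSZ`, `reserved_page_size`, and `pd_special` here are simplified stand-ins for the real page-header machinery.

```c
#include <assert.h>
#include <stdint.h>

#define BLCKSZ 8192

static int reserved_page_size = 8;	/* runtime-reserved trailer size */

/* Corrected check from the review: pd_special must leave room for the
 * reserved trailer at the end of the block. */
int
special_pointer_valid(uint16_t pd_special)
{
	return (uint32_t) pd_special + (uint32_t) reserved_page_size <= BLCKSZ;
}

/* The form originally proposed in the patch subtracts instead, so a
 * pd_special pointing at or past the usable area still passes. */
int
special_pointer_valid_buggy(uint16_t pd_special)
{
	return (int32_t) pd_special - reserved_page_size <= BLCKSZ;
}
```

With `reserved_page_size = 8`, the corrected form rejects `pd_special == BLCKSZ` while the subtracting form accepts it.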
{
"msg_contents": "Hi Matthias,\n\n> Did you read the related thread with related discussion from last June, \"Re: better page-level checksums\" [0]? In that I argued that space at the end of a page is already allocated for the AM, and that reserving variable space at the end of the page for non-AM usage is wasting the AM's performance potential.\n\nYes, I had read parts of that thread among others, but have given it a\nre-read. I can see the point you're making here, and agree that if we\ncan allocate between pd_special and pd_upper that could make sense. I\nam a little unclear as to what performance impacts for the AM there\nwould be if this additional space were ahead or behind the page\nspecial area; it seems like if this is something that needs to live on\nthe page *somewhere* just being aligned correctly would be sufficient\nfrom the AM's standpoint. Considering that I am trying to make this\nhave zero storage impact if these features are not active, the impact\non a cluster with no additional features would be moot from a storage\nperspective, no?\n\n> Apart from that: Is this variable-sized 'metadata' associated with smgr infrastructure only, or is it also available for AM features? If not; then this is a strong -1. The amount of tasks smgr needs to do on a page is generally much less than the amount of tasks an AM needs to do; so in my view the AM has priority in prime page real estate, not smgr or related infrastructure.\n\nI will confess to a slightly wobbly understanding of the delineation\nof responsibility here. I was under the impression that by modifying\nany consumer of PageHeaderData this would be sufficient to cover all\nAMs for the types of cluster-wide options we'd be concerned about (say\nextended checksums, multiple page encryption schemes, or other\nper-page information we haven't yet anticipated). 
Reading smgr/README\nand the various access/*/README has not made the distinction clear to\nme yet.\n\n> re: PageFeatures\n> I'm not sure I understand the goal, nor the reasoning. Shouldn't this be part of the storage manager (smgr) implementation / can't this be part of the smgr of the relation?\n\nFor at least the feature cases I'm anticipating, this would apply to\nany disk page that may have user data, set (at least initially) at\ninitdb time, so should apply to any pages in the cluster, regardless\nof AM.\n\n> re: use of pd_checksum\n> I mentioned this in the above-mentioned thread too, in [1], that we could use pd_checksum as an extra area marker for this storage-specific data, which would be located between pd_upper and pd_special.\n\nI do think that we could indeed use this as an additional in-page\npointer, but at least for this version was keeping things\nbackwards-compatible. Peter G (I think) also made some good points\nabout how to include the various status bits on the page somehow in\nterms of making a page completely self-contained.\n\n> Re: patch contents\n>\n> 0001:\n> >+ specialSize = MAXALIGN(specialSize) + reserved_page_size;\n>\n> This needs to be aligned, so MAXALIGN(specialSize + reserved_page_size), or an assertion that reserved_page_size is MAXALIGNED, would be better.\n\nIt is currently aligned via the space calculation return value but\nagree that folding it into an assert or reworking it explicitly is\nclearer.\n\n> > PageValidateSpecialPointer(Page page)\n> > {\n> > Assert(page);\n> > - Assert(((PageHeader) page)->pd_special <= BLCKSZ);\n> > + Assert((((PageHeader) page)->pd_special - reserved_page_size) <= BLCKSZ);\n>\n> This check is incorrect. With your code it would allow pd_special past the end of the block. 
If you want to put the reserved_space_size effectively inside the special area, this check should instead be:\n>\n> + Assert(((PageHeader) page)->pd_special <= (BLCKSZ - reserved_page_size));\n>\n> Or, equally valid\n>\n> + Assert((((PageHeader) page)->pd_special + reserved_page_size) <= BLCKSZ);\n\nYup, I think I inverted my logic there; thanks.\n\n> > + * +-------------+-----+------------+-----------------+\n> > + * | ... tuple2 tuple1 | \"special space\" | \"reserved\" |\n> > + * +-------------------+------------+-----------------+\n>\n> Could you fix the table display if / when you revise the patchset? It seems to me that the corners don't line up with the column borders.\n\nSure thing.\n\n> 0002:\n> > Make the output of \"select_views\" test stable\n> > Changing the reserved_page_size has resulted in non-stable results for this test.\n>\n> This makes sense, what kind of instability are we talking about? Are there different results for runs with the same binary, or is this across compilations?\n\nWhen running with the same compilation/initdb settings, the test\nresults are stable, but differ depending what options you chose, so\n`make installcheck` output will fail when testing a cluster with\ndifferent options vs upstream HEAD without these patches, etc.\n\n> 0003 and up were not yet reviewed in depth.\n\nThanks, I appreciate the feedback so far.\n\n\n",
"msg_date": "Fri, 28 Oct 2022 17:25:33 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Sat, 29 Oct 2022 at 00:25, David Christensen\n<david.christensen@crunchydata.com> wrote:\n>\n> Hi Matthias,\n>\n> > Did you read the related thread with related discussion from last June, \"Re: better page-level checksums\" [0]? In that I argued that space at the end of a page is already allocated for the AM, and that reserving variable space at the end of the page for non-AM usage is wasting the AM's performance potential.\n>\n> Yes, I had read parts of that thread among others, but have given it a\n> re-read. I can see the point you're making here, and agree that if we\n> can allocate between pd_special and pd_upper that could make sense. I\n> am a little unclear as to what performance impacts for the AM there\n> would be if this additional space were ahead or behind the page\n> special area; it seems like if this is something that needs to live on\n> the page *somewhere* just being aligned correctly would be sufficient\n> from the AM's standpoint.\n\nIt would be sufficient, but it is definitely suboptimal. See [0] as a\npatch that is being held back by putting stuff behind the special\narea.\n\nI don't really care much about the storage layout on-disk, but I do\ncare that AMs have efficient access to their data. For the page\nheader, line pointers, and special area, that is currently guaranteed\nby the current page layout. However, for the special area, that\ncurrently guaranteed offset of (BLCKSZ -\nMAXALIGN(sizeof(IndexOpaque))) will get broken as there would be more\nspace in the special area than the AM would be expecting. Right now,\nour index AMs are doing pointer chasing during special area lookups\nfor no good reason, but with the patch it would be required. 
I don't\nlike that at all.\n\nI understand that it is a requirement to store this reserved\nspace in a fixed place on the on-disk page (you must know where the\nchecksum is at static places on the page, otherwise you'd potentially\nmis-validate a page), but that requirement is not there for in-memory\nstorage. I think it's a small price to pay to swap the fields around\nduring R/W operations - the largest size of special area is currently\n24 bytes, and the proposals I've seen for this extra storage area\nwould not need it to be actually filled with data whilst the page is\nbeing used by the AM (checksum could be zeroed in in-memory\noperations, and it'd get set during writeback; same with all other\nfields I can imagine the storage system using).\n\n> Considering that I am trying to make this\n> have zero storage impact if these features are not active, the impact\n> on a cluster with no additional features would be moot from a storage\n> perspective, no?\n\nThe issue is that I'd like to eliminate the redirection from the page\nheader in the hot path. Currently, we can do that, and pd_special\nwould be little more than a hint to the smgr and pd_linp code that\nthat area is special and reserved for this access method's private\ndata, so that it is not freed. If you stick something extra in there,\nit's not special for the AM's private data, and the AM won't be able\nto use pd_special for similar uses as pd_linp+pd_lower. I'd rather\nhave the storage system use its own not-special area; choreographed\nby e.g. a reuse of pd_checksum for one more page offset. Swapping the\nfields around between on-disk and in-memory doesn't need to be an\nissue, as special areas are rarely very large.\n\nEvery index type we support utilizes the special area. Wouldn't those\nin-memory operations have priority on this useful space, as opposed to\na storage system that maybe will be used in new clusters, and even\nthen only during R/W operations to disk (each at most once for N\nmemory operations)?\n\n> > Apart from that: Is this variable-sized 'metadata' associated with smgr infrastructure only, or is it also available for AM features? If not; then this is a strong -1. The amount of tasks smgr needs to do on a page is generally much less than the amount of tasks an AM needs to do; so in my view the AM has priority in prime page real estate, not smgr or related infrastructure.\n>\n> I will confess to a slightly wobbly understanding of the delineation\n> of responsibility here. I was under the impression that by modifying\n> any consumer of PageHeaderData this would be sufficient to cover all\n> AMs for the types of cluster-wide options we'd be concerned about (say\n> extended checksums, multiple page encryption schemes, or other\n> per-page information we haven't yet anticipated). Reading smgr/README\n> and the various access/*/README has not made the distinction clear to\n> me yet.\n\npd_special has (historically) been reserved for access methods'\npage-level private data. If you add to this area, shouldn't that be\nspace that the AM should be able to hook into as well? Or are all\nthose features limited to the storage system only; i.e. the storage\nsystem decides what's best for the AM's page handling w.r.t. physical\nstorage?\n\n> > re: PageFeatures\n> > I'm not sure I understand the goal, nor the reasoning. Shouldn't this be part of the storage manager (smgr) implementation / can't this be part of the smgr of the relation?\n>\n> For at least the feature cases I'm anticipating, this would apply to\n> any disk page that may have user data, set (at least initially) at\n> initdb time, so should apply to any pages in the cluster, regardless\n> of AM.\n\nOK, so having a storage manager for each supported set of features is\nnot planned for this. Understood.\n\n> > re: use of pd_checksum\n> > I mentioned this in the above-mentioned thread too, in [1], that we could use pd_checksum as an extra area marker for this storage-specific data, which would be located between pd_upper and pd_special.\n>\n> I do think that we could indeed use this as an additional in-page\n> pointer, but at least for this version was keeping things\n> backwards-compatible. Peter G (I think) also made some good points\n> about how to include the various status bits on the page somehow in\n> terms of making a page completely self-contained.\n\nI think that adding page header bits would suffice for backwards\ncompatibility if we'd want to reuse pd_checksum. A new\nPD_CHECKSUM_REUSED_FOR_STORAGE would suffice here; it would be unset\nin normal (pre-patch, or without these fancy new features) clusters.\n\nKind regards,\n\nMatthias van de Meent\n\nPS. sorry for the rant. I hope my arguments are clear why I dislike\nthe storage area being placed behind the special area in memory.\n\n[0] https://commitfest.postgresql.org/40/3543/\n\n\n",
"msg_date": "Mon, 31 Oct 2022 20:42:17 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
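The in-memory/on-disk field swap proposed above could look roughly like the following. This is a hypothetical sketch only (the sizes and the zeroed trailer are assumptions), meant to show that keeping the AM's special area flush against the page end in memory costs one bounded move per write-back.

```c
#include <assert.h>
#include <string.h>

#define BLCKSZ 8192

/*
 * Hypothetical write-back swap: in memory the AM's special area stays
 * flush against the end of the page (so AMs keep a compile-time
 * BLCKSZ - sizeof(...) offset); on write-back the special area is
 * shifted up so the fixed-position reserved trailer occupies the final
 * bytes on disk.
 */
void
writeback_layout(char *page, size_t special_size, size_t reserved)
{
	/* slide the special area down to sit just before the trailer */
	memmove(page + BLCKSZ - reserved - special_size,
			page + BLCKSZ - special_size,
			special_size);
	/* the trailer (e.g. an extended checksum) is filled in afterwards */
	memset(page + BLCKSZ - reserved, 0, reserved);
}
```

The read path would perform the mirror-image move after verifying the trailer, restoring the AM-friendly in-memory layout.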
{
"msg_contents": "Per some offline discussion with Stephen and incorporating some of the\nfeedback I've gotten I'm including the following changes/revisions:\n\n1. Change the signature of any macros that rely on a dynamic component\nto look like a function so you can more easily determine in-code\nwhether something is truly a constant/compile time calculation or a\nruntime one.\n\n2. We use a new page flag for whether \"extended page features\" are\nenabled on the given page. If this is set then we look for the 1-byte\ntrailer with the bitflag of the number of features. We allow space for\n7 page features and are reserving the final hi bit for future\nuse/change of interpretation to accommodate more.\n\n3. Consolidate the extended checksums into a 56-bit checksum that\nimmediately precedes the 1-byte flag. Choice of 64-bit checksum is\narbitrary just based on some MIT-licensed code I found, so just\nconsidering this proof of concept, not necessarily promoting that\nspecific calculation. (I think I included some additional checksum\nvariants from the earlier revision for ease of testing various\napproaches.)\n\n4. Ensure the whole area is MAXALIGN and fixed a few bugs that were\npointed out in this thread.\n\nPatches are:\n\n1. make select_views stable, prerequisite for anything that is messing\nwith tuples on page sizes\n\n2. add reserved_page_size handling and rework existing code to account\nfor this additional space usage\n\n3. main PageFeatures-related code; introduce that abstraction layer,\nalong with the trailing byte on the page with the enabled features for\nthis specific page. We also add an additional param to PageInit()\nwith the page features active on this page; currently all call sites\nare using the cluster-wide cluster_page_features as the parameter, so\nall pages share what is stored in the control file based on initdb\noptions. 
However, routines which query page features look on the\nactual page itself, so in fact we are able to piecemeal at the\npage/relation level if we so desire, or turn off for specific types of\npages, say. Also includes the additional pd_flags bit to enable that\ninterpretation.\n\n4. Actual extended checksums PageFeature. Rather than two separate\nimplementations as in the previous patch series, we are using 56 bits\nof a 64-bit checksum, stored as the high 7 bytes of the final 8 in the\npage where this is enabled.\n\n5. wasted_space PageFeature just to demo multiple features in play.\n\nThanks,\n\nDavid",
"msg_date": "Tue, 8 Nov 2022 10:17:56 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
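The consolidated trailer from points 2 and 3 can be sketched as an 8-byte block at the very end of the page: a one-byte feature bitflag (low 7 bits used, high bit reserved) preceded by the high 56 bits of a 64-bit checksum. The byte ordering below is an assumption chosen for illustration, not the patch's actual encoding.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLCKSZ 8192

/* Pack the hypothetical trailer: 7 checksum bytes, then the flag byte. */
void
trailer_write(char *page, uint64_t checksum64, uint8_t features)
{
	uint64_t	hi56 = checksum64 >> 8; /* keep the high 56 bits */
	char	   *trailer = page + BLCKSZ - 8;

	for (int i = 0; i < 7; i++)
		trailer[i] = (char) (hi56 >> (8 * i));
	trailer[7] = (char) (features & 0x7f);	/* high bit reserved */
}

/* Recover the stored 56-bit checksum value. */
uint64_t
trailer_read_checksum(const char *page)
{
	const unsigned char *trailer =
		(const unsigned char *) (page + BLCKSZ - 8);
	uint64_t	hi56 = 0;

	for (int i = 0; i < 7; i++)
		hi56 |= (uint64_t) trailer[i] << (8 * i);
	return hi56;
}

/* Recover the per-page feature bitflag (high bit masked off). */
uint8_t
trailer_read_features(const char *page)
{
	return (uint8_t) page[BLCKSZ - 1] & 0x7f;
}
```

Because the flag byte sits at a fixed position, a reader can first check the page-level `pd_flags` bit, then the trailer byte, before trusting the checksum bytes.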
{
"msg_contents": "Looking into some CF bot failures which didn't show up locally. Will\nsend a v3 when resolved.\n\n\n",
"msg_date": "Tue, 8 Nov 2022 14:07:34 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "So here is a v3 here, incorporating additional bug fixes and some\ndesign revisions. I have narrowed this down to 3 patches, fixing the\nbugs that were leading to the instability of the specific test file so\ndropping that as well as removing the useless POC \"wasted space\".\n\nThe following pieces are left:\n\n0001 - adjust the codebase to utilize the \"reserved_page_space\"\nvariable for all offsets rather than assuming compile-time constants.\nThis allows us to effectively allocate a fixed chunk of storage from\nthe end of the page and have everything still work on this cluster.\n0002 - add the Page Feature abstraction. This allows you to utilize\nthis chunk of storage, as well as query for feature use at the page\nlevel.\n0003 - the first page feature, 64-bit encryption (soon to be\nrenumbered when GCM storage for TDE is introduced, though the two\nfeatures are designed to be incompatible). This includes an\narbitrarily found 64-bit checksum, so we probably will need to write\nour own or ensure that we have something license-compatible.\n\nThis is rebased and current as-of-today and passes all CI tests, so\nshould be in a good place to start looking at.\n\nBest,\n\nDavid",
"msg_date": "Tue, 24 Jan 2023 18:38:50 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
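For reference, the pointer math that 0001 threads through the codebase reduces to the sketch below. `MiniPageHeader` and the constants are simplified stand-ins; the patch folds the reserved space into the special-size calculation, but treating it separately, as here, yields the same resulting pointers.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BLCKSZ 8192
#define MAXALIGN(LEN) (((uintptr_t) (LEN) + 7) & ~((uintptr_t) 7))
#define SizeOfPageHeaderData 24	/* illustrative stand-in */

static size_t reserved_page_space = 16;	/* runtime, read from pg_control */

typedef struct
{
	uint16_t	pd_lower;
	uint16_t	pd_upper;
	uint16_t	pd_special;
} MiniPageHeader;

/*
 * Sketch of the revised PageInit pointer math: the reserved chunk comes
 * out of the page end first, the MAXALIGN'd special space sits just
 * before it, and pd_upper starts out equal to pd_special.
 */
void
mini_page_init(MiniPageHeader *p, size_t pageSize, size_t specialSize)
{
	specialSize = MAXALIGN(specialSize);
	assert(specialSize + reserved_page_space + SizeOfPageHeaderData <= pageSize);

	p->pd_lower = SizeOfPageHeaderData;
	p->pd_special = (uint16_t) (pageSize - reserved_page_space - specialSize);
	p->pd_upper = p->pd_special;
}
```

When `reserved_page_space` is 0 this degenerates to the stock layout, which is how the patch keeps zero overhead for clusters with no features enabled.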
{
"msg_contents": "Refreshing this with HEAD as of today, v4.",
"msg_date": "Tue, 9 May 2023 17:08:26 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Greetings,\n\n* David Christensen (david.christensen@crunchydata.com) wrote:\n> Refreshing this with HEAD as of today, v4.\n\nThanks for updating this!\n\n> Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> \n> This space is reserved for extended data on the Page structure which will be ultimately used for\n> encrypted data, extended checksums, and potentially other things. This data appears at the end of\n> the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> ControlFile features.\n> \n> No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> we will require logical replication to move data into a cluster with different settings here.\n\nThis initial patch, at least, does maintain pg_upgrade as the\nreserved_page_size (maybe not a great name?) is set to 0, right?\nBasically this is just introducing the concept of a reserved_page_size\nand adjusting all of the code that currently uses BLCKSZ or\nPageGetPageSize() to account for this extra space.\n\nLooking at the changes to bufpage.h, in particular ...\n\n> diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h\n\n> @@ -19,6 +19,14 @@\n> #include \"storage/item.h\"\n> #include \"storage/off.h\"\n> \n> +extern PGDLLIMPORT int reserved_page_size;\n> +\n> +#define SizeOfPageReservedSpace() reserved_page_size\n> +#define MaxSizeOfPageReservedSpace 0\n> +\n> +/* strict upper bound on the amount of space occupied we have reserved on\n> + * pages in this cluster */\n\nThis will eventually be calculated based on what features are supported\nconcurrently?\n\n> @@ -36,10 +44,10 @@\n> * |\t\t\t v pd_upper\t\t\t\t\t\t\t |\n> * +-------------+------------------------------------+\n> * |\t\t\t | tupleN ... |\n> - * +-------------+------------------+-----------------+\n> - * |\t ... 
tuple3 tuple2 tuple1 | \"special space\" |\n> - * +--------------------------------+-----------------+\n> - *\t\t\t\t\t\t\t\t\t^ pd_special\n> + * +-------------+-----+------------+----+------------+\n> + * | ... tuple2 tuple1 | \"special space\" | \"reserved\" |\n> + * +-------------------+------------+----+------------+\n> + *\t\t\t\t\t ^ pd_special ^ reserved_page_space\n\nRight, adds a dynamic amount of space 'post-special area'.\n\n> @@ -73,6 +81,8 @@\n> * stored as the page trailer. an access method should always\n> * initialize its pages with PageInit and then set its own opaque\n> * fields.\n> + *\n> + * XXX - update more comments here about reserved_page_space\n> */\n\nWould be good to do. ;)\n\n> @@ -325,7 +335,7 @@ static inline void\n> PageValidateSpecialPointer(Page page)\n> {\n> \tAssert(page);\n> -\tAssert(((PageHeader) page)->pd_special <= BLCKSZ);\n> +\tAssert((((PageHeader) page)->pd_special + reserved_page_size) <= BLCKSZ);\n> \tAssert(((PageHeader) page)->pd_special >= SizeOfPageHeaderData);\n> }\n\nThis is just one usage ... but seems like maybe we should be using\nPageGetPageSize() here instead of BLCKSZ, and that more-or-less\nthroughout? Nearly everywhere we're using BLCKSZ today to give us that\ncompile-time advantage of a fixed block size is going to lose that\nadvantage anyway thanks to reserved_page_size being run-time. Now, one\nup-side to this is that it'd also get us closer to being able to support\ndynamic block sizes concurrently which would be quite interesting. That\nis, a special tablespace with a 32KB block size while the rest are the\ntraditional 8KB. 
This would likely require multiple shared buffer\npools, of course...\n\n> diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c\n> index 9a302ddc30..a93cd9df9f 100644\n> --- a/src/backend/storage/page/bufpage.c\n> +++ b/src/backend/storage/page/bufpage.c\n> @@ -26,6 +26,8 @@\n> /* GUC variable */\n> bool\t\tignore_checksum_failure = false;\n> \n> +int\t\t\treserved_page_size = 0; /* how much page space to reserve for extended unencrypted metadata */\n> +\n> \n> /* ----------------------------------------------------------------\n> *\t\t\t\t\t\tPage support functions\n> @@ -43,7 +45,7 @@ PageInit(Page page, Size pageSize, Size specialSize)\n> {\n> \tPageHeader\tp = (PageHeader) page;\n> \n> -\tspecialSize = MAXALIGN(specialSize);\n> +\tspecialSize = MAXALIGN(specialSize) + reserved_page_size;\n\nRather than make it part of specialSize, I would think we'd be better\noff just treating them independently. Eg, the later pd_upper setting\nwould be done by:\n\np->pd_upper = pageSize - specialSize - reserved_page_size;\n\netc.\n\n> @@ -186,7 +188,7 @@ PageIsVerifiedExtended(Page page, BlockNumber blkno, int flags)\n> *\tone that is both unused and deallocated.\n> *\n> *\tIf flag PAI_IS_HEAP is set, we enforce that there can't be more than\n> - *\tMaxHeapTuplesPerPage line pointers on the page.\n> + *\tMaxHeapTuplesPerPage() line pointers on the page.\n\nMaking MaxHeapTuplesPerPage() runtime dynamic is a requirement for\nsupporting multiple page sizes concurrently ... but I'm not sure it's\nactually required for the reserved_page_size idea as currently\nconsidered. The reason is that with 8K or larger pages, the amount of\nspace we're already throwing away is at least 20 bytes, if I did my math\nright. 
If we constrain reserved_page_size to be 20 bytes or less, as I\nbelieve we're currently thinking we won't need that much, then we could\nperhaps keep MaxHeapTuplesPerPage as a compile-time constant.\n\nOn the other hand, to the extent that we want to consider having\nvariable page sizes in the future, perhaps we do want to change this.\nIf so, the approach broadly looks reasonable to me, but I'd suggest we\nmake that a separate patch from the introduction of reserved_page_size.\n\n> @@ -211,7 +213,7 @@ PageAddItemExtended(Page page,\n> \tif (phdr->pd_lower < SizeOfPageHeaderData ||\n> \t\tphdr->pd_lower > phdr->pd_upper ||\n> \t\tphdr->pd_upper > phdr->pd_special ||\n> -\t\tphdr->pd_special > BLCKSZ)\n> +\t\tphdr->pd_special + reserved_page_size > BLCKSZ)\n> \t\tereport(PANIC,\n> \t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n> \t\t\t\t errmsg(\"corrupted page pointers: lower = %u, upper = %u, special = %u\",\n\nProbably should add reserved_page_size to that errmsg output? Also,\nthis check of pointers seems to be done multiple times- maybe it should\nbe moved into a #define or similar?\n\n> @@ -723,7 +725,7 @@ PageRepairFragmentation(Page page)\n> \tif (pd_lower < SizeOfPageHeaderData ||\n> \t\tpd_lower > pd_upper ||\n> \t\tpd_upper > pd_special ||\n> -\t\tpd_special > BLCKSZ ||\n> +\t\tpd_special + reserved_page_size > BLCKSZ ||\n> \t\tpd_special != MAXALIGN(pd_special))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\nThis ends up being the same as above ...\n\n> @@ -1066,7 +1068,7 @@ PageIndexTupleDelete(Page page, OffsetNumber offnum)\n> \tif (phdr->pd_lower < SizeOfPageHeaderData ||\n> \t\tphdr->pd_lower > phdr->pd_upper ||\n> \t\tphdr->pd_upper > phdr->pd_special ||\n> -\t\tphdr->pd_special > BLCKSZ ||\n> +\t\tphdr->pd_special + reserved_page_size > BLCKSZ ||\n> \t\tphdr->pd_special != MAXALIGN(phdr->pd_special))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\nAnd here ...\n\n> @@ -1201,7 +1203,7 @@ 
PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems)\n> \tif (pd_lower < SizeOfPageHeaderData ||\n> \t\tpd_lower > pd_upper ||\n> \t\tpd_upper > pd_special ||\n> -\t\tpd_special > BLCKSZ ||\n> +\t\tpd_special + reserved_page_size > BLCKSZ ||\n> \t\tpd_special != MAXALIGN(pd_special))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\nAnd here ...\n\n> @@ -1307,7 +1309,7 @@ PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offnum)\n> \tif (phdr->pd_lower < SizeOfPageHeaderData ||\n> \t\tphdr->pd_lower > phdr->pd_upper ||\n> \t\tphdr->pd_upper > phdr->pd_special ||\n> -\t\tphdr->pd_special > BLCKSZ ||\n> +\t\tphdr->pd_special + reserved_page_size > BLCKSZ ||\n> \t\tphdr->pd_special != MAXALIGN(phdr->pd_special))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\nAnd here ...\n\n> @@ -1419,7 +1421,7 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,\n> \tif (phdr->pd_lower < SizeOfPageHeaderData ||\n> \t\tphdr->pd_lower > phdr->pd_upper ||\n> \t\tphdr->pd_upper > phdr->pd_special ||\n> -\t\tphdr->pd_special > BLCKSZ ||\n> +\t\tphdr->pd_special + reserved_page_size > BLCKSZ ||\n> \t\tphdr->pd_special != MAXALIGN(phdr->pd_special))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\nAnd here ...\n\n> diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\n> index 6979aff727..060c4ab3e3 100644\n> --- a/contrib/amcheck/verify_nbtree.c\n> +++ b/contrib/amcheck/verify_nbtree.c\n> @@ -489,12 +489,12 @@ bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,\n> \t\t/*\n> \t\t * Size Bloom filter based on estimated number of tuples in index,\n> \t\t * while conservatively assuming that each block must contain at least\n> -\t\t * MaxTIDsPerBTreePage / 3 \"plain\" tuples -- see\n> +\t\t * MaxTIDsPerBTreePage() / 3 \"plain\" tuples -- see\n> \t\t * bt_posting_plain_tuple() for definition, and details of how posting\n> \t\t * list tuples are handled.\n> \t\t 
*/\n> \t\ttotal_pages = RelationGetNumberOfBlocks(rel);\n> -\t\ttotal_elems = Max(total_pages * (MaxTIDsPerBTreePage / 3),\n> +\t\ttotal_elems = Max(total_pages * (MaxTIDsPerBTreePage() / 3),\n> \t\t\t\t\t\t (int64) state->rel->rd_rel->reltuples);\n> \t\t/* Generate a random seed to avoid repetition */\n> \t\tseed = pg_prng_uint64(&pg_global_prng_state);\n\nMaking MaxTIDsPerBTreePage dynamic looks to be required as it doesn't\nend up with any 'leftover' space, from what I can tell. Again, though,\nperhaps this should be split out as an independent patch from the rest.\nThat is- we can change the higher-level functions to be dynamic in the\ninitial patches, and then eventually we'll get down to making the\nlower-level functions dynamic.\n\n> diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h\n> index efdf9415d1..8ebabdd7ee 100644\n> --- a/contrib/bloom/bloom.h\n> +++ b/contrib/bloom/bloom.h\n> @@ -131,7 +131,7 @@ typedef struct BloomMetaPageData\n> #define BLOOM_MAGICK_NUMBER (0xDBAC0DED)\n> \n> /* Number of blocks numbers fit in BloomMetaPageData */\n> -#define BloomMetaBlockN\t\t(sizeof(FreeBlockNumberArray) / sizeof(BlockNumber))\n> +#define BloomMetaBlockN()\t\t((sizeof(FreeBlockNumberArray) - SizeOfPageReservedSpace())/ sizeof(BlockNumber))\n> \n> #define BloomPageGetMeta(page)\t((BloomMetaPageData *) PageGetContents(page))\n> \n> @@ -151,6 +151,7 @@ typedef struct BloomState\n> \n> #define BloomPageGetFreeSpace(state, page) \\\n> \t(BLCKSZ - MAXALIGN(SizeOfPageHeaderData) \\\n> +\t\t- SizeOfPageReservedSpace()\t\t\t\t\t\t\t\t \\\n> \t\t- BloomPageGetMaxOffset(page) * (state)->sizeOfBloomTuple \\\n> \t\t- MAXALIGN(sizeof(BloomPageOpaqueData)))\n\nThis formulation (or something close to it) tends to happen quite a bit:\n\n(BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - SizeOfPageReservedSpace() ...\n\nThis is basically asking for \"amount of usable space\" where the\nresulting 'usable space' either includes line pointers and tuples or\nsimilar, or doesn't. 
Perhaps we should break this down into two\npatches- one which provides a function to return usable space on a page,\nand then the patch to add reserved_page_size can simply adjust that\ninstead of changing the very, very many places we have this formulation.\n\n> diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c\n> index d935ed8fbd..d3d74a9d28 100644\n> --- a/contrib/bloom/blutils.c\n> +++ b/contrib/bloom/blutils.c\n> @@ -430,10 +430,10 @@ BloomFillMetapage(Relation index, Page metaPage)\n> \t */\n> \tBloomInitPage(metaPage, BLOOM_META);\n> \tmetadata = BloomPageGetMeta(metaPage);\n> -\tmemset(metadata, 0, sizeof(BloomMetaPageData));\n> +\tmemset(metadata, 0, sizeof(BloomMetaPageData) - SizeOfPageReservedSpace());\n\nThis doesn't seem quite right? The reserved space is off at the end of\nthe page and this is 0'ing the space immediately after the page header,\nif I'm following correctly, and only to the size of BloomMetaPageData...\n\n> \tmetadata->magickNumber = BLOOM_MAGICK_NUMBER;\n> \tmetadata->opts = *opts;\n> -\t((PageHeader) metaPage)->pd_lower += sizeof(BloomMetaPageData);\n> +\t((PageHeader) metaPage)->pd_lower += sizeof(BloomMetaPageData) - SizeOfPageReservedSpace();\n\nNot quite following what's going on here either.\n\n> diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c\n> @@ -116,7 +116,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,\n> \t\t */\n> \t\tif (BloomPageGetMaxOffset(page) != 0 &&\n> \t\t\tBloomPageGetFreeSpace(&state, page) >= state.sizeOfBloomTuple &&\n> -\t\t\tcountPage < BloomMetaBlockN)\n> +\t\t\tcountPage < BloomMetaBlockN())\n> \t\t\tnotFullPage[countPage++] = blkno;\n\nLooks to be another opportunity to have a separate patch making this\nchange first before actually changing the lower-level #define's,\n\n> diff --git a/src/backend/access/brin/brin_tuple.c b/src/backend/access/brin/brin_tuple.c\n> @@ -217,7 +217,7 @@ brin_form_tuple(BrinDesc *brdesc, BlockNumber blkno, BrinMemTuple 
*tuple,\n> \t\t\t * datatype, try to compress it in-line.\n> \t\t\t */\n> \t\t\tif (!VARATT_IS_EXTENDED(DatumGetPointer(value)) &&\n> -\t\t\t\tVARSIZE(DatumGetPointer(value)) > TOAST_INDEX_TARGET &&\n> +\t\t\t\tVARSIZE(DatumGetPointer(value)) > TOAST_INDEX_TARGET() &&\n> \t\t\t\t(atttype->typstorage == TYPSTORAGE_EXTENDED ||\n> \t\t\t\t atttype->typstorage == TYPSTORAGE_MAIN))\n> \t\t\t{\n\nProbably could be another patch but also if we're going to change\nTOAST_INDEX_TARGET to be a function we should probably not have it named\nin all-CAPS.\n\n> diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c\n> @@ -535,7 +535,7 @@ dataBeginPlaceToPageLeaf(GinBtree btree, Buffer buf, GinBtreeStack *stack,\n> \t\t * a single byte, and we can use all the free space on the old page as\n> \t\t * well as the new page. For simplicity, ignore segment overhead etc.\n> \t\t */\n> -\t\tmaxitems = Min(maxitems, freespace + GinDataPageMaxDataSize);\n> +\t\tmaxitems = Min(maxitems, freespace + GinDataPageMaxDataSize());\n> \t}\n> \telse\n> \t{\n\nDitto.\n\n> diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c\n> @@ -38,8 +38,8 @@\n> /* GUC parameter */\n> int\t\t\tgin_pending_list_limit = 0;\n> \n> -#define GIN_PAGE_FREESIZE \\\n> -\t( BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - MAXALIGN(sizeof(GinPageOpaqueData)) )\n> +#define GIN_PAGE_FREESIZE() \\\n> +\t( BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - MAXALIGN(sizeof(GinPageOpaqueData)) - SizeOfPageReservedSpace() )\n\nAnother case of BLCKSZ - MAXALIGN(SizeOfPageHeaderData) -\nSizeOfPageReservedSpace() ...\n\n> @@ -450,7 +450,7 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector)\n> \t * ginInsertCleanup() should not be called inside our CRIT_SECTION.\n> \t */\n> \tcleanupSize = GinGetPendingListCleanupSize(index);\n> -\tif (metadata->nPendingPages * GIN_PAGE_FREESIZE > cleanupSize * 1024L)\n> +\tif (metadata->nPendingPages * GIN_PAGE_FREESIZE() > 
cleanupSize * 1024L)\n> \t\tneedCleanup = true;\n\nAlso shouldn't be all-CAPS.\n\n> diff --git a/src/backend/access/nbtree/nbtsplitloc.c b/src/backend/access/nbtree/nbtsplitloc.c\n> index 43b67893d9..5babbb457a 100644\n> --- a/src/backend/access/nbtree/nbtsplitloc.c\n> +++ b/src/backend/access/nbtree/nbtsplitloc.c\n> @@ -156,7 +156,7 @@ _bt_findsplitloc(Relation rel,\n> \n> \t/* Total free space available on a btree page, after fixed overhead */\n> \tleftspace = rightspace =\n> -\t\tPageGetPageSize(origpage) - SizeOfPageHeaderData -\n> +\t\tPageGetPageSize(origpage) - SizeOfPageHeaderData - SizeOfPageReservedSpace() -\n> \t\tMAXALIGN(sizeof(BTPageOpaqueData));\n\nAlso here ... though a bit interesting that this uses PageGetPageSize()\ninstead of BLCKSZ.\n\n> diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\n> index 011ec18015..022b5eee4e 100644\n> --- a/src/backend/utils/init/globals.c\n> +++ b/src/backend/utils/init/globals.c\n> @@ -154,3 +154,4 @@ int64\t\tVacuumPageDirty = 0;\n> \n> int\t\t\tVacuumCostBalance = 0;\t/* working state for vacuum */\n> bool\t\tVacuumCostActive = false;\n> +\n\nUnnecessary whitespace hunk ?\n\nThanks!\n\nStephen",
"msg_date": "Fri, 12 May 2023 19:47:56 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Fri, May 12, 2023 at 7:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * David Christensen (david.christensen@crunchydata.com) wrote:\n> > Refreshing this with HEAD as of today, v4.\n>\n> Thanks for updating this!\n\nThanks for the patience in my response here.\n\n> > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> >\n> > This space is reserved for extended data on the Page structure which will be ultimately used for\n> > encrypted data, extended checksums, and potentially other things. This data appears at the end of\n> > the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> > ControlFile features.\n> >\n> > No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> > we will require logical replication to move data into a cluster with different settings here.\n>\n> This initial patch, at least, does maintain pg_upgrade as the\n> reserved_page_size (maybe not a great name?) 
is set to 0, right?\n> Basically this is just introducing the concept of a reserved_page_size\n> and adjusting all of the code that currently uses BLCKSZ or\n> PageGetPageSize() to account for this extra space.\n\nCorrect; a reserved_page_size of 0 would be the same page format as\ncurrently exists, so you could use pg_upgrade with no page features\nand be binary compatible with existing clusters.\n\n> Looking at the changes to bufpage.h, in particular ...\n>\n> > diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h\n>\n> > @@ -19,6 +19,14 @@\n> > #include \"storage/item.h\"\n> > #include \"storage/off.h\"\n> >\n> > +extern PGDLLIMPORT int reserved_page_size;\n> > +\n> > +#define SizeOfPageReservedSpace() reserved_page_size\n> > +#define MaxSizeOfPageReservedSpace 0\n> > +\n> > +/* strict upper bound on the amount of space occupied we have reserved on\n> > + * pages in this cluster */\n>\n> This will eventually be calculated based on what features are supported\n> concurrently?\n\nCorrect; these are fleshed out in later patches.\n\n> > @@ -36,10 +44,10 @@\n> > * | v pd_upper |\n> > * +-------------+------------------------------------+\n> > * | | tupleN ... |\n> > - * +-------------+------------------+-----------------+\n> > - * | ... tuple3 tuple2 tuple1 | \"special space\" |\n> > - * +--------------------------------+-----------------+\n> > - * ^ pd_special\n> > + * +-------------+-----+------------+----+------------+\n> > + * | ... tuple2 tuple1 | \"special space\" | \"reserved\" |\n> > + * +-------------------+------------+----+------------+\n> > + * ^ pd_special ^ reserved_page_space\n>\n> Right, adds a dynamic amount of space 'post-special area'.\n\nDynamic as in \"fixed at initdb time\" instead of compile time. However,\nthings are coded in such a way that the page feature bitmap is stored\non a given page, so different pages could have different\nreserved_page_size depending on use case/code path. 
(Basically\npreserving future flexibility while minimizing code changes here.) We\ncould utilize different features depending on what type of page it is,\nsay, or have different relations or tablespaces with different page\nfeature defaults.\n\n> > @@ -73,6 +81,8 @@\n> > * stored as the page trailer. an access method should always\n> > * initialize its pages with PageInit and then set its own opaque\n> > * fields.\n> > + *\n> > + * XXX - update more comments here about reserved_page_space\n> > */\n>\n> Would be good to do. ;)\n\nNext revision... :D\n\n> > @@ -325,7 +335,7 @@ static inline void\n> > PageValidateSpecialPointer(Page page)\n> > {\n> > Assert(page);\n> > - Assert(((PageHeader) page)->pd_special <= BLCKSZ);\n> > + Assert((((PageHeader) page)->pd_special + reserved_page_size) <= BLCKSZ);\n> > Assert(((PageHeader) page)->pd_special >= SizeOfPageHeaderData);\n> > }\n>\n> This is just one usage ... but seems like maybe we should be using\n> PageGetPageSize() here instead of BLCKSZ, and that more-or-less\n> throughout? Nearly everywhere we're using BLCKSZ today to give us that\n> compile-time advantage of a fixed block size is going to lose that\n> advantage anyway thanks to reserved_page_size being run-time. Now, one\n> up-side to this is that it'd also get us closer to being able to support\n> dynamic block sizes concurrently which would be quite interesting. That\n> is, a special tablespace with a 32KB block size while the rest are the\n> traditional 8KB. This would likely require multiple shared buffer\n> pools, of course...\n\nI think multiple shared-buffer pools is a ways off; but sure, this\nwould support this sort of use case as well. I am working on a new\npatch for this series (probably the first one in the series) which\nwill actually just abstract away all existing compile-time usages of\nBLCKSZ. 
This will be a start in that direction and also make the\nreserved_page_size patch a bit more reasonable to review.\n\n> > diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c\n> > index 9a302ddc30..a93cd9df9f 100644\n> > --- a/src/backend/storage/page/bufpage.c\n> > +++ b/src/backend/storage/page/bufpage.c\n> > @@ -26,6 +26,8 @@\n> > /* GUC variable */\n> > bool ignore_checksum_failure = false;\n> >\n> > +int reserved_page_size = 0; /* how much page space to reserve for extended unencrypted metadata */\n> > +\n> >\n> > /* ----------------------------------------------------------------\n> > * Page support functions\n> > @@ -43,7 +45,7 @@ PageInit(Page page, Size pageSize, Size specialSize)\n> > {\n> > PageHeader p = (PageHeader) page;\n> >\n> > - specialSize = MAXALIGN(specialSize);\n> > + specialSize = MAXALIGN(specialSize) + reserved_page_size;\n>\n> Rather than make it part of specialSize, I would think we'd be better\n> off just treating them independently. Eg, the later pd_upper setting\n> would be done by:\n>\n> p->pd_upper = pageSize - specialSize - reserved_page_size;\n>\n> etc.\n\nI can see that there's a mild readability benefit, but really the\neffect is local to PageInit(), so ¯\\_(ツ)_/¯... happy to make that\nchange though.\n\n> > @@ -186,7 +188,7 @@ PageIsVerifiedExtended(Page page, BlockNumber blkno, int flags)\n> > * one that is both unused and deallocated.\n> > *\n> > * If flag PAI_IS_HEAP is set, we enforce that there can't be more than\n> > - * MaxHeapTuplesPerPage line pointers on the page.\n> > + * MaxHeapTuplesPerPage() line pointers on the page.\n>\n> Making MaxHeapTuplesPerPage() runtime dynamic is a requirement for\n> supporting multiple page sizes concurrently ... but I'm not sure it's\n> actually required for the reserved_page_size idea as currently\n> considered. The reason is that with 8K or larger pages, the amount of\n> space we're already throwing away is at least 20 bytes, if I did my math\n> right. 
If we constrain reserved_page_size to be 20 bytes or less, as I\n> believe we're currently thinking we won't need that much, then we could\n> perhaps keep MaxHeapTuplesPerPage as a compile-time constant.\n\nIn this version we don't have that explicit constraint. In practice I\ndon't know that we have many more than 20 bytes, at least for the\nfirst few features, but I don't think we can count on that forever\ngoing forward. At some point we're going to have to parameterize\nthese, so might as well do it in this pass, since how else would you\nknow that this magic value has been exceeded?\n\n> On the other hand, to the extent that we want to consider having\n> variable page sizes in the future, perhaps we do want to change this.\n> If so, the approach broadly looks reasonable to me, but I'd suggest we\n> make that a separate patch from the introduction of reserved_page_size.\n\nThe variable blocksize patch I'm working on includes some of this, so\nthis will be in the next revision.\n\n> > @@ -211,7 +213,7 @@ PageAddItemExtended(Page page,\n> > if (phdr->pd_lower < SizeOfPageHeaderData ||\n> > phdr->pd_lower > phdr->pd_upper ||\n> > phdr->pd_upper > phdr->pd_special ||\n> > - phdr->pd_special > BLCKSZ)\n> > + phdr->pd_special + reserved_page_size > BLCKSZ)\n> > ereport(PANIC,\n> > (errcode(ERRCODE_DATA_CORRUPTED),\n> > errmsg(\"corrupted page pointers: lower = %u, upper = %u, special = %u\",\n>\n> Probably should add reserved_page_size to that errmsg output? Also,\n> this check of pointers seems to be done multiple times- maybe it should\n> be moved into a #define or similar?\n\nSure, can change; agreed it'd be good to have. I just modified the\nexisting call sites and didn't attempt to change too much else.\n\n[snipped other instances...]\n\n> Making MaxTIDsPerBTreePage dynamic looks to be required as it doesn't\n> end up with any 'leftover' space, from what I can tell. 
Again, though,\n> perhaps this should be split out as an independent patch from the rest.\n> That is- we can change the higher-level functions to be dynamic in the\n> initial patches, and then eventually we'll get down to making the\n> lower-level functions dynamic.\n\nSame; should be accounted for in the next variable blocksize patch.\nIt does have a cascading effect though, so hard to make the high-level\nfunctions dynamic but not the lower-level ones. What is the benefit in\nthis case for separating those two?\n\n> > diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h\n> > index efdf9415d1..8ebabdd7ee 100644\n> > --- a/contrib/bloom/bloom.h\n> > +++ b/contrib/bloom/bloom.h\n> > @@ -131,7 +131,7 @@ typedef struct BloomMetaPageData\n> > #define BLOOM_MAGICK_NUMBER (0xDBAC0DED)\n> >\n> > /* Number of blocks numbers fit in BloomMetaPageData */\n> > -#define BloomMetaBlockN (sizeof(FreeBlockNumberArray) / sizeof(BlockNumber))\n> > +#define BloomMetaBlockN() ((sizeof(FreeBlockNumberArray) - SizeOfPageReservedSpace())/ sizeof(BlockNumber))\n> >\n> > #define BloomPageGetMeta(page) ((BloomMetaPageData *) PageGetContents(page))\n> >\n> > @@ -151,6 +151,7 @@ typedef struct BloomState\n> >\n> > #define BloomPageGetFreeSpace(state, page) \\\n> > (BLCKSZ - MAXALIGN(SizeOfPageHeaderData) \\\n> > + - SizeOfPageReservedSpace() \\\n> > - BloomPageGetMaxOffset(page) * (state)->sizeOfBloomTuple \\\n> > - MAXALIGN(sizeof(BloomPageOpaqueData)))\n>\n> This formulation (or something close to it) tends to happen quite a bit:\n>\n> (BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - SizeOfPageReservedSpace() ...\n>\n> This is basically asking for \"amount of usable space\" where the\n> resulting 'usable space' either includes line pointers and tuples or\n> similar, or doesn't. 
Perhaps we should break this down into two\n> patches- one which provides a function to return usable space on a page,\n> and then the patch to add reserved_page_size can simply adjust that\n> instead of changing the very, very many places we have this forumlation.\n\nYeah, I can make this a computed expression; agreed it's pretty common\nto have the usable space on the page so really any AM shouldn't know\nor care about the details of either header or footer. Since we\nalready have PageGetContents() I will probably name it\nPageGetContentsSize(). The AM can own everything from the pointer\nreturned by PageGetContents() through said size, allowing for both the\nheader and reserved_page_size in said computation.\n\n> > diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c\n> > index d935ed8fbd..d3d74a9d28 100644\n> > --- a/contrib/bloom/blutils.c\n> > +++ b/contrib/bloom/blutils.c\n> > @@ -430,10 +430,10 @@ BloomFillMetapage(Relation index, Page metaPage)\n> > */\n> > BloomInitPage(metaPage, BLOOM_META);\n> > metadata = BloomPageGetMeta(metaPage);\n> > - memset(metadata, 0, sizeof(BloomMetaPageData));\n> > + memset(metadata, 0, sizeof(BloomMetaPageData) - SizeOfPageReservedSpace());\n>\n> This doesn't seem quite right? The reserved space is off at the end of\n> the page and this is 0'ing the space immediately after the page header,\n> if I'm following correctly, and only to the size of BloomMetaPageData...\n\nI think you're correct with that analysis. BloomInitPage() would have\n(probably?) had a zero'd page so this underset would have been\nunnoticed in practice, but still good to fix.\n\n> > metadata->magickNumber = BLOOM_MAGICK_NUMBER;\n> > metadata->opts = *opts;\n> > - ((PageHeader) metaPage)->pd_lower += sizeof(BloomMetaPageData);\n> > + ((PageHeader) metaPage)->pd_lower += sizeof(BloomMetaPageData) - SizeOfPageReservedSpace();\n>\n> Not quite following what's going on here either.\n\nHeh, not sure either. 
Not sure if there was a reason or a mechanical\nreplacement, but will look when I do next revisions.\n\n> > diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c\n> > @@ -116,7 +116,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,\n> > */\n> > if (BloomPageGetMaxOffset(page) != 0 &&\n> > BloomPageGetFreeSpace(&state, page) >= state.sizeOfBloomTuple &&\n> > - countPage < BloomMetaBlockN)\n> > + countPage < BloomMetaBlockN())\n> > notFullPage[countPage++] = blkno;\n>\n> Looks to be another opportunity to have a separate patch making this\n> change first before actually changing the lower-level #define's,\n\n#include <stdpatch/variable-blocksize>\n\n> > diff --git a/src/backend/access/brin/brin_tuple.c b/src/backend/access/brin/brin_tuple.c\n> > @@ -217,7 +217,7 @@ brin_form_tuple(BrinDesc *brdesc, BlockNumber blkno, BrinMemTuple *tuple,\n> > * datatype, try to compress it in-line.\n> > */\n> > if (!VARATT_IS_EXTENDED(DatumGetPointer(value)) &&\n> > - VARSIZE(DatumGetPointer(value)) > TOAST_INDEX_TARGET &&\n> > + VARSIZE(DatumGetPointer(value)) > TOAST_INDEX_TARGET() &&\n> > (atttype->typstorage == TYPSTORAGE_EXTENDED ||\n> > atttype->typstorage == TYPSTORAGE_MAIN))\n> > {\n>\n> Probably could be another patch but also if we're going to change\n> TOAST_INDEX_TARGET to be a function we should probably not have it named\n> in all-CAPS.\n\nOkay, can make those style changes as well; agreed ALLCAPS should be constant.\n\n> > diff --git a/src/backend/access/nbtree/nbtsplitloc.c b/src/backend/access/nbtree/nbtsplitloc.c\n> > index 43b67893d9..5babbb457a 100644\n> > --- a/src/backend/access/nbtree/nbtsplitloc.c\n> > +++ b/src/backend/access/nbtree/nbtsplitloc.c\n> > @@ -156,7 +156,7 @@ _bt_findsplitloc(Relation rel,\n> >\n> > /* Total free space available on a btree page, after fixed overhead */\n> > leftspace = rightspace =\n> > - PageGetPageSize(origpage) - SizeOfPageHeaderData -\n> > + PageGetPageSize(origpage) - SizeOfPageHeaderData - 
SizeOfPageReservedSpace() -\n> > MAXALIGN(sizeof(BTPageOpaqueData));\n>\n> Also here ... though a bit interesting that this uses PageGetPageSize()\n> instead of BLCKSZ.\n\nYeah, a few little exceptions. Variable blocksize patch introduces\nthose every place it can, and ClusterBlockSize() anywhere it can't.\n\n> > diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\n> > index 011ec18015..022b5eee4e 100644\n> > --- a/src/backend/utils/init/globals.c\n> > +++ b/src/backend/utils/init/globals.c\n> > @@ -154,3 +154,4 @@ int64 VacuumPageDirty = 0;\n> >\n> > int VacuumCostBalance = 0; /* working state for vacuum */\n> > bool VacuumCostActive = false;\n> > +\n>\n> Unnecessary whitespace hunk ?\n\nWill clean up.\n\nThanks for the review,\n\nDavid\n\n\n",
"msg_date": "Tue, 30 May 2023 13:35:37 -0400",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> From: David Christensen <david@pgguru.net>\n> Date: Tue, 9 May 2023 16:56:15 -0500\n> Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n>\n> This space is reserved for extended data on the Page structure which will be ultimately used for\n> encrypted data, extended checksums, and potentially other things. This data appears at the end of\n> the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> ControlFile features.\n>\n> No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> we will require logical replication to move data into a cluster with\n> different settings here.\n\nThe first part of the last paragraph makes it sound like pg_upgrade won't be\nsupported across this commit, rather than just between different settings...\n\nI think as a whole this is not an insane idea. A few comments:\n\n- IMO the patch touches many places it shouldn't need to touch, because of\n essentially renaming a lot of existing macro names to *Limit,\n necessitating modifying a lot of users. I think instead the few places that\n care about the runtime limit should be modified.\n\n As-is the patch would cause a lot of fallout in extensions that just do\n things like defining an on-stack array of Datums or such - even though all\n they'd need is to change the define to the *Limit one.\n\n Even leaving extensions aside, it must makes reviewing (and I'm sure\n maintaining) the patch very tedious.\n\n\n- I'm a bit worried about how the extra special page will be managed - if\n there are multiple features that want to use it, who gets to put their data\n at what offset?\n\n After writing this I saw that 0002 tries to address this - but I don't like\n the design. 
It introduces runtime overhead that seems likely to be visible.\n\n\n- Checking for features using PageGetFeatureOffset() seems the wrong design to\n  me - instead of a branch for some feature being disabled, perfectly\n  predictable for the CPU, we need to do an external function call every time\n  to figure out that yes, checksums are *still* disabled.\n\n\n- Recomputing offsets every time in PageGetFeatureOffset() seems too\n  expensive. The offsets can't change while running as PageGetFeatureOffset()\n  has enough information to distinguish between different kinds of relations\n  - so why do we need to recompute offsets on every single page? I'd instead\n  add a distinct offset variable for each feature.\n\n\n- Modifying every single PageInit() call doesn't make sense to me. That'll\n  just create a lot of breakage for - as far as I can tell - no win.\n\n\n- Why is it worth sacrificing space on every page to indicate which features\n  were enabled? I think there'd need to be some convincing reasons for\n  introducing such overhead.\n\n- Is it really useful to encode the set of features enabled in a cluster with\n  a bitmask? That pretty much precludes utilizing extra page space in\n  extensions. We could instead just have an extra cluster-wide file that\n  defines a mapping of offset to feature.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Nov 2023 16:20:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> > From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> > From: David Christensen <david@pgguru.net>\n> > Date: Tue, 9 May 2023 16:56:15 -0500\n> > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> >\n> > This space is reserved for extended data on the Page structure which will be ultimately used for\n> > encrypted data, extended checksums, and potentially other things.  This data appears at the end of\n> > the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> > ControlFile features.\n> >\n> > No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> > we will require logical replication to move data into a cluster with\n> > different settings here.\n> \n> The first part of the last paragraph makes it sound like pg_upgrade won't be\n> supported across this commit, rather than just between different settings...\n> \n> I think as a whole this is not an insane idea. A few comments:\n\nThanks for all the feedback!\n\n> - Why is it worth sacrificing space on every page to indicate which features\n>   were enabled? I think there'd need to be some convincing reasons for\n>   introducing such overhead.\n\nIn conversations with folks (my memory specifically is a discussion with\nPeter G, added to CC, and my apologies to Peter if I'm misremembering)\nthere was a pretty strong push that a page should be able to 'stand\nalone' and not depend on something else (eg: pg_control, or whatever) to\nprovide info needed to be able to interpret the page.  For my part, I don't\nhave a particularly strong feeling on that, but that's what led to this\ndesign.\n\nGetting a consensus on if that's a requirement or not would definitely\nbe really helpful.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 8 Nov 2023 09:04:13 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 8:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> > > From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> > > From: David Christensen <david@pgguru.net>\n> > > Date: Tue, 9 May 2023 16:56:15 -0500\n> > > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> > >\n> > > This space is reserved for extended data on the Page structure which\n> will be ultimately used for\n> > > encrypted data, extended checksums, and potentially other things.\n> This data appears at the end of\n> > > the Page, after any `pd_special` area, and will be calculated at\n> runtime based on specific\n> > > ControlFile features.\n> > >\n> > > No effort is made to ensure this is backwards-compatible with existing\n> clusters for `pg_upgrade`, as\n> > > we will require logical replication to move data into a cluster with\n> > > different settings here.\n> >\n> > The first part of the last paragraph makes it sound like pg_upgrade\n> won't be\n> > supported across this commit, rather than just between different\n> settings...\n>\n\nYeah, that's vague, but you picked up on what I meant.\n\n\n> > I think as a whole this is not an insane idea. A few comments:\n>\n> Thanks for all the feedback!\n>\n> > - Why is it worth sacrificing space on every page to indicate which\n> features\n> > were enabled? I think there'd need to be some convincing reasons for\n> > introducing such overhead.\n>\n> In conversations with folks (my memory specifically is a discussion with\n> Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n> there was a pretty strong push that a page should be able to 'stand\n> alone' and not depend on something else (eg: pg_control, or whatever) to\n> provide info needed be able to interpret the page. For my part, I don't\n> have a particularly strong feeling on that, but that's what lead to this\n> design.\n>\n\nUnsurprisingly, I agree that it's useful to keep these features on the page\nitself; from a forensic standpoint this seems much easier to interpret what\nis happening here, as well it would allow you to have different features on\na given page or type of page depending on need. The initial patch utilizes\npg_control to store the cluster page features, but there's no reason it\ncouldn't be dependent on fork/page type or stored in pg_tablespace to\nutilize different features.\n\nThanks,\n\nDavid\n\nOn Wed, Nov 8, 2023 at 8:04 AM Stephen Frost <sfrost@snowman.net> wrote:Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> > From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> > From: David Christensen <david@pgguru.net>\n> > Date: Tue, 9 May 2023 16:56:15 -0500\n> > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> >\n> > This space is reserved for extended data on the Page structure which will be ultimately used for\n> > encrypted data, extended checksums, and potentially other things. This data appears at the end of\n> > the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> > ControlFile features.\n> >\n> > No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> > we will require logical replication to move data into a cluster with\n> > different settings here.\n> \n> The first part of the last paragraph makes it sound like pg_upgrade won't be\n> supported across this commit, rather than just between different settings...Yeah, that's vague, but you picked up on what I meant. \n> I think as a whole this is not an insane idea. A few comments:\n\nThanks for all the feedback!\n\n> - Why is it worth sacrificing space on every page to indicate which features\n> were enabled? I think there'd need to be some convincing reasons for\n> introducing such overhead.\n\nIn conversations with folks (my memory specifically is a discussion with\nPeter G, added to CC, and my apologies to Peter if I'm misremembering)\nthere was a pretty strong push that a page should be able to 'stand\nalone' and not depend on something else (eg: pg_control, or whatever) to\nprovide info needed be able to interpret the page. For my part, I don't\nhave a particularly strong feeling on that, but that's what lead to this\ndesign.Unsurprisingly, I agree that it's useful to keep these features on the page itself; from a forensic standpoint this seems much easier to interpret what is happening here, as well it would allow you to have different features on a given page or type of page depending on need. The initial patch utilizes pg_control to store the cluster page features, but there's no reason it couldn't be dependent on fork/page type or stored in pg_tablespace to utilize different features.Thanks,David",
"msg_date": "Wed, 8 Nov 2023 19:55:16 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Nov 8, 2023 at 20:55 David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> On Wed, Nov 8, 2023 at 8:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n>> * Andres Freund (andres@anarazel.de) wrote:\n>> > On 2023-05-09 17:08:26 -0500, David Christensen wrote:\n>> > > From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n>> > > From: David Christensen <david@pgguru.net>\n>> > > Date: Tue, 9 May 2023 16:56:15 -0500\n>> > > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n>> > >\n>> > > This space is reserved for extended data on the Page structure which\n>> will be ultimately used for\n>> > > encrypted data, extended checksums, and potentially other things.\n>> This data appears at the end of\n>> > > the Page, after any `pd_special` area, and will be calculated at\n>> runtime based on specific\n>> > > ControlFile features.\n>> > >\n>> > > No effort is made to ensure this is backwards-compatible with\n>> existing clusters for `pg_upgrade`, as\n>> > > we will require logical replication to move data into a cluster with\n>> > > different settings here.\n>> >\n>> > The first part of the last paragraph makes it sound like pg_upgrade\n>> won't be\n>> > supported across this commit, rather than just between different\n>> settings...\n>>\n>\n> Yeah, that's vague, but you picked up on what I meant.\n>\n>\n>> > I think as a whole this is not an insane idea. A few comments:\n>>\n>> Thanks for all the feedback!\n>>\n>> > - Why is it worth sacrificing space on every page to indicate which\n>> features\n>> > were enabled? I think there'd need to be some convincing reasons for\n>> > introducing such overhead.\n>>\n>> In conversations with folks (my memory specifically is a discussion with\n>> Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n>> there was a pretty strong push that a page should be able to 'stand\n>> alone' and not depend on something else (eg: pg_control, or whatever) to\n>> provide info needed be able to interpret the page. For my part, I don't\n>> have a particularly strong feeling on that, but that's what lead to this\n>> design.\n>>\n>\n> Unsurprisingly, I agree that it's useful to keep these features on the\n> page itself; from a forensic standpoint this seems much easier to interpret\n> what is happening here, as well it would allow you to have different\n> features on a given page or type of page depending on need. The initial\n> patch utilizes pg_control to store the cluster page features, but there's\n> no reason it couldn't be dependent on fork/page type or stored in\n> pg_tablespace to utilize different features.\n>\n\nWhen it comes to authenticated encryption, it’s also the case that it’s\nunclear what value the checksum field has, if any… it’s certainly not\ndirectly needed as a checksum, as the auth tag is much better for the\npurpose of seeing if the page has been changed in some way. It’s also not\nbig enough to serve as an auth tag per NIST guidelines regarding the size\nof the authenticated data vs. the size of the tag. Using it to indicate\nwhat features are enabled on the page seems pretty useful, as David notes.\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Wed, Nov 8, 2023 at 20:55 David Christensen <david.christensen@crunchydata.com> wrote:On Wed, Nov 8, 2023 at 8:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> > From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> > From: David Christensen <david@pgguru.net>\n> > Date: Tue, 9 May 2023 16:56:15 -0500\n> > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> >\n> > This space is reserved for extended data on the Page structure which will be ultimately used for\n> > encrypted data, extended checksums, and potentially other things. This data appears at the end of\n> > the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> > ControlFile features.\n> >\n> > No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> > we will require logical replication to move data into a cluster with\n> > different settings here.\n> \n> The first part of the last paragraph makes it sound like pg_upgrade won't be\n> supported across this commit, rather than just between different settings...Yeah, that's vague, but you picked up on what I meant. \n> I think as a whole this is not an insane idea. A few comments:\n\nThanks for all the feedback!\n\n> - Why is it worth sacrificing space on every page to indicate which features\n> were enabled? I think there'd need to be some convincing reasons for\n> introducing such overhead.\n\nIn conversations with folks (my memory specifically is a discussion with\nPeter G, added to CC, and my apologies to Peter if I'm misremembering)\nthere was a pretty strong push that a page should be able to 'stand\nalone' and not depend on something else (eg: pg_control, or whatever) to\nprovide info needed be able to interpret the page. For my part, I don't\nhave a particularly strong feeling on that, but that's what lead to this\ndesign.Unsurprisingly, I agree that it's useful to keep these features on the page itself; from a forensic standpoint this seems much easier to interpret what is happening here, as well it would allow you to have different features on a given page or type of page depending on need. The initial patch utilizes pg_control to store the cluster page features, but there's no reason it couldn't be dependent on fork/page type or stored in pg_tablespace to utilize different features.When it comes to authenticated encryption, it’s also the case that it’s unclear what value the checksum field has, if any… it’s certainly not directly needed as a checksum, as the auth tag is much better for the purpose of seeing if the page has been changed in some way. It’s also not big enough to serve as an auth tag per NIST guidelines regarding the size of the authenticated data vs. the size of the tag. Using it to indicate what features are enabled on the page seems pretty useful, as David notes.Thanks,Stephen",
"msg_date": "Wed, 8 Nov 2023 21:05:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 6:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n> In conversations with folks (my memory specifically is a discussion with\n> Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n> there was a pretty strong push that a page should be able to 'stand\n> alone' and not depend on something else (eg: pg_control, or whatever) to\n> provide info needed be able to interpret the page. For my part, I don't\n> have a particularly strong feeling on that, but that's what lead to this\n> design.\n\nThe term that I have used in the past is \"self-contained\". Meaning\ncapable of being decoded more or less as-is, without any metadata, by\ntools like pg_filedump.\n\nAny design in this area should try to make things as easy to debug as\npossible, for the obvious reason: encrypted data that somehow becomes\ncorrupt is bound to be a nightmare to debug. (Besides, we already\nsupport tools like pg_filedump, so this isn't a new principle.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Nov 2023 18:47:56 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 6:20 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> > From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> > From: David Christensen <david@pgguru.net>\n> > Date: Tue, 9 May 2023 16:56:15 -0500\n> > Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n> >\n> > This space is reserved for extended data on the Page structure which\n> will be ultimately used for\n> > encrypted data, extended checksums, and potentially other things. This\n> data appears at the end of\n> > the Page, after any `pd_special` area, and will be calculated at runtime\n> based on specific\n> > ControlFile features.\n> >\n> > No effort is made to ensure this is backwards-compatible with existing\n> clusters for `pg_upgrade`, as\n> > we will require logical replication to move data into a cluster with\n> > different settings here.\n>\n> The first part of the last paragraph makes it sound like pg_upgrade won't\n> be\n> supported across this commit, rather than just between different\n> settings...\n>\n\nThanks for the review.\n\n\n> I think as a whole this is not an insane idea. A few comments:\n>\n> - IMO the patch touches many places it shouldn't need to touch, because of\n> essentially renaming a lot of existing macro names to *Limit,\n> necessitating modifying a lot of users. I think instead the few places\n> that\n> care about the runtime limit should be modified.\n>\n> As-is the patch would cause a lot of fallout in extensions that just do\n> things like defining an on-stack array of Datums or such - even though\n> all\n> they'd need is to change the define to the *Limit one.\n>\n> Even leaving extensions aside, it must makes reviewing (and I'm sure\n> maintaining) the patch very tedious.\n>\n\nYou make a good point, and I think you're right that we could teach the\nplaces that care about runtime vs compile time differences about the\nchanges while leaving other callers alone. The *Limit ones were introduced\nsince we need constant values here from the Calc...() macros, but could try\nkeeping the existing *Limit with the old name and switching things around.\nI suspect there will be the same amount of code churn, but less mechanical.\n\n\n> - I'm a bit worried about how the extra special page will be managed - if\n> there are multiple features that want to use it, who gets to put their\n> data\n> at what offset?\n>\n> After writing this I saw that 0002 tries to address this - but I don't\n> like\n> the design. It introduces runtime overhead that seems likely to be\n> visible.\n>\n\nAgreed this could be optimized.\n\n\n> - Checking for features using PageGetFeatureOffset() seems the wrong\n> design to\n> me - instead of a branch for some feature being disabled, perfectly\n> predictable for the CPU, we need to do an external function call every\n> time\n> to figure out that yet, checksums are *still* disabled.\n>\n\nThis is probably not a supported approach (it felt a little icky), but I'd\nplayed around with const pointers to structs of const elements, where the\ninitial values of a global var was populated early on (so set once and\nnever changed post init), and the compiler didn't complain and things\nseemed to work ok; not sure if this approach might help balance the early\nmutability and constant lookup needs:\n\ntypedef struct PageFeatureOffsets {\n const Size feature0offset;\n const Size feature1offset;\n ...\n} PageFeatureOffsets;\n\nPageFeatureOffsets offsets = {0};\nconst PageFeatureOffsets *exposedOffsets = &offsets;\n\nvoid InitOffsets() {\n *((Size*)&offsets.feature0offset) = ...;\n *((Size*)&offsets.feature1offset) = ...;\n...\n}\n\n- Recomputing offsets every time in PageGetFeatureOffset() seems too\n> expensive. The offsets can't change while running as\n> PageGetFeatureOffset()\n> have enough information to distinguish between different kinds of\n> relations\n>\n\nYes, this was a simple approach for ease of implementation; there is\ncertainly a way to precompute a lookup table from the page feature bitmask\ninto the offsets themselves or otherwise precompute, turn from function\ncall into inline/macro, etc.\n\n\n> - so why do we need to recompute offsets on every single page? I'd\n> instead\n> add a distinct offset variable for each feature.\n>\n\nThis would work iff there is a single page feature set across all pages in\nthe cluster; I'm not sure we don't want more flexibility here.\n\n\n> - Modifying every single PageInit() call doesn't make sense to me. That'll\n> just create a lot of breakage for - as far as I can tell - no win.\n>\n\nThis was a placeholder to allow different features depending on page type;\nto keep things simple for now I just used the same values here, but we\ncould move this inside PageInit() instead (again, assuming single feature\nset per cluster).\n\n\n> - Why is it worth sacrificing space on every page to indicate which\n> features\n> were enabled? I think there'd need to be some convincing reasons for\n> introducing such overhead.\n>\n\nThe point here is if we can use either GCM authtag or stronger checksums\nthen we've gained the ability to authenticate the page contents at the cost\nof reassigning those bits, in a way that would support variable\npermutations of features for different relations or page types, if so\ndesired. A single global setting here both eliminates that possibility as\nwell as requires external data in order to fully interpret pages.\n\n\n> - Is it really useful to encode the set of features enabled in a cluster\n> with\n> a bitmask? That pretty much precludes utilizing extra page space in\n> extensions. We could instead just have an extra cluster-wide file that\n> defines a mapping of offset to feature.\n>\n\nGiven the current design, yes we do need that, which does make it harder to\nallocate/use from an extension. Due to needing to have consistent offsets\nfor a given feature set (however represented on a page), the implementation\nload going forward as-is involves ensuring that a given bit always maps to\nthe same offset in the page regardless of additional features available in\nthe future. So the 0'th bit if enabled would always map to the 8 byte chunk\nat the end of the page, the 1st bit corresponds to some amount of space\nprior to that, etc. I'm not sure how to get that property without some\nsort of bitmap or otherwise indexed operation.\n\nI get what you're saying as far as the more global approach, and while that\ndoes lend itself to some nice properties in terms of extensibility, some of\nthe features (GCM tags in particular) need to be able to control the page\noffset at a consistent location so we can decode the rest of the page\nwithout knowing anything else.\n\nAdditionally, since the reserved space/page features are configured at\ninitdb time I am unclear how a given extension would even be able to stake\na claim here. ...though if we consider this a two-part problem, one of\nspace reservation and one of space usage, that part could be handled via\nallocating more than the minimum in the reserved_page_space and allowing\nunallocated page space to be claimed later via some sort of additional\nfunctions/other hook. That opens up other questions though, tracking\nwhether said space has ever been initialized and what to do when first\naccessing existing/new pages as one example.\n\nBest,\n\nDavid\n\nOn Tue, Nov 7, 2023 at 6:20 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2023-05-09 17:08:26 -0500, David Christensen wrote:\n> From 965309ea3517fa734c4bc89c144e2031cdf6c0c3 Mon Sep 17 00:00:00 2001\n> From: David Christensen <david@pgguru.net>\n> Date: Tue, 9 May 2023 16:56:15 -0500\n> Subject: [PATCH v4 1/3] Add reserved_page_space to Page structure\n>\n> This space is reserved for extended data on the Page structure which will be ultimately used for\n> encrypted data, extended checksums, and potentially other things. This data appears at the end of\n> the Page, after any `pd_special` area, and will be calculated at runtime based on specific\n> ControlFile features.\n>\n> No effort is made to ensure this is backwards-compatible with existing clusters for `pg_upgrade`, as\n> we will require logical replication to move data into a cluster with\n> different settings here.\n\nThe first part of the last paragraph makes it sound like pg_upgrade won't be\nsupported across this commit, rather than just between different settings...Thanks for the review. \nI think as a whole this is not an insane idea. A few comments:\n\n- IMO the patch touches many places it shouldn't need to touch, because of\n essentially renaming a lot of existing macro names to *Limit,\n necessitating modifying a lot of users. I think instead the few places that\n care about the runtime limit should be modified.\n\n As-is the patch would cause a lot of fallout in extensions that just do\n things like defining an on-stack array of Datums or such - even though all\n they'd need is to change the define to the *Limit one.\n\n Even leaving extensions aside, it must makes reviewing (and I'm sure\n maintaining) the patch very tedious.\nYou make a good point, and I think you're right that we could teach the places that care about runtime vs compile time differences about the changes while leaving other callers alone. The *Limit ones were introduced since we need constant values here from the Calc...() macros, but could try keeping the existing *Limit with the old name and switching things around. I suspect there will be the same amount of code churn, but less mechanical. \n- I'm a bit worried about how the extra special page will be managed - if\n there are multiple features that want to use it, who gets to put their data\n at what offset?\n\n After writing this I saw that 0002 tries to address this - but I don't like\n the design. It introduces runtime overhead that seems likely to be visible.Agreed this could be optimized. \n- Checking for features using PageGetFeatureOffset() seems the wrong design to\n me - instead of a branch for some feature being disabled, perfectly\n predictable for the CPU, we need to do an external function call every time\n to figure out that yet, checksums are *still* disabled.This is probably not a supported approach (it felt a little icky), but I'd played around with const pointers to structs of const elements, where the initial values of a global var was populated early on (so set once and never changed post init), and the compiler didn't complain and things seemed to work ok; not sure if this approach might help balance the early mutability and constant lookup needs:typedef struct PageFeatureOffsets { const Size feature0offset; const Size feature1offset; ...} PageFeatureOffsets; PageFeatureOffsets offsets = {0};const PageFeatureOffsets *exposedOffsets = &offsets;void InitOffsets() { *((Size*)&offsets.feature0offset) = ...; *((Size*)&offsets.feature1offset) = ...;...}\n- Recomputing offsets every time in PageGetFeatureOffset() seems too\n expensive. The offsets can't change while running as PageGetFeatureOffset()\n have enough information to distinguish between different kinds of relationsYes, this was a simple approach for ease of implementation; there is certainly a way to precompute a lookup table from the page feature bitmask into the offsets themselves or otherwise precompute, turn from function call into inline/macro, etc. \n - so why do we need to recompute offsets on every single page? I'd instead\n add a distinct offset variable for each feature.\nThis would work iff there is a single page feature set across all pages in the cluster; I'm not sure we don't want more flexibility here. \n- Modifying every single PageInit() call doesn't make sense to me. That'll\n just create a lot of breakage for - as far as I can tell - no win.This was a placeholder to allow different features depending on page type; to keep things simple for now I just used the same values here, but we could move this inside PageInit() instead (again, assuming single feature set per cluster). \n- Why is it worth sacrificing space on every page to indicate which features\n were enabled? I think there'd need to be some convincing reasons for\n introducing such overhead.The point here is if we can use either GCM authtag or stronger checksums then we've gained the ability to authenticate the page contents at the cost of reassigning those bits, in a way that would support variable permutations of features for different relations or page types, if so desired. A single global setting here both eliminates that possibility as well as requires external data in order to fully interpret pages. \n- Is it really useful to encode the set of features enabled in a cluster with\n a bitmask? That pretty much precludes utilizing extra page space in\n extensions. We could instead just have an extra cluster-wide file that\n defines a mapping of offset to feature.Given the current design, yes we do need that, which does make it harder to allocate/use from an extension. Due to needing to have consistent offsets for a given feature set (however represented on a page), the implementation load going forward as-is involves ensuring that a given bit always maps to the same offset in the page regardless of additional features available in the future. So the 0'th bit if enabled would always map to the 8 byte chunk at the end of the page, the 1st bit corresponds to some amount of space prior to that, etc. I'm not sure how to get that property without some sort of bitmap or otherwise indexed operation.I get what you're saying as far as the more global approach, and while that does lend itself to some nice properties in terms of extensibility, some of the features (GCM tags in particular) need to be able to control the page offset at a consistent location so we can decode the rest of the page without knowing anything else. Additionally, since the reserved space/page features are configured at initdb time I am unclear how a given extension would even be able to stake a claim here. ...though if we consider this a two-part problem, one of space reservation and one of space usage, that part could be handled via allocating more than the minimum in the reserved_page_space and allowing unallocated page space to be claimed later via some sort of additional functions/other hook. That opens up other questions though, tracking whether said space has ever been initialized and what to do when first accessing existing/new pages as one example.Best,David",
"msg_date": "Mon, 13 Nov 2023 14:03:36 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 18:47:56 -0800, Peter Geoghegan wrote:\n> On Wed, Nov 8, 2023 at 6:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > In conversations with folks (my memory specifically is a discussion with\n> > Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n> > there was a pretty strong push that a page should be able to 'stand\n> > alone' and not depend on something else (eg: pg_control, or whatever) to\n> > provide info needed be able to interpret the page. For my part, I don't\n> > have a particularly strong feeling on that, but that's what lead to this\n> > design.\n> \n> The term that I have used in the past is \"self-contained\". Meaning\n> capable of being decoded more or less as-is, without any metadata, by\n> tools like pg_filedump.\n\nI'm not finding that very convincing - without cluster wide data, like keys, a\ntool like pg_filedump isn't going to be able to do much with encrypted\npages. Given the need to look at some global data, figuring out the offset at\nwhich data starts based on a value in pg_control isn't meaningfully worse than\nhaving the data on each page.\n\nStoring redundant data in each page header, when we've wanted space in the\npage header for plenty other things, just doesn't seem a good use of said\nspace.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:27:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 2:27 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-11-08 18:47:56 -0800, Peter Geoghegan wrote:\n> > On Wed, Nov 8, 2023 at 6:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > In conversations with folks (my memory specifically is a discussion\n> with\n> > > Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n> > > there was a pretty strong push that a page should be able to 'stand\n> > > alone' and not depend on something else (eg: pg_control, or whatever)\n> to\n> > > provide info needed be able to interpret the page. For my part, I\n> don't\n> > > have a particularly strong feeling on that, but that's what lead to\n> this\n> > > design.\n> >\n> > The term that I have used in the past is \"self-contained\". Meaning\n> > capable of being decoded more or less as-is, without any metadata, by\n> > tools like pg_filedump.\n>\n> I'm not finding that very convincing - without cluster wide data, like\n> keys, a\n> tool like pg_filedump isn't going to be able to do much with encrypted\n> pages. Given the need to look at some global data, figuring out the offset\n> at\n> which data starts based on a value in pg_control isn't meaningfully worse\n> than\n> having the data on each page.\n>\n> Storing redundant data in each page header, when we've wanted space in the\n> page header for plenty other things, just doesn't seem a good use of said\n> space.\n>\n\nThis scheme would open up space per page that would now be available for\nplenty of other things; the encoding in the header and the corresponding\navailable space in the footer would seem to open up quite a few options\nnow, no?\n\nOn Mon, Nov 13, 2023 at 2:27 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2023-11-08 18:47:56 -0800, Peter Geoghegan wrote:\n> On Wed, Nov 8, 2023 at 6:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > In conversations with folks (my memory specifically is a discussion with\n> > Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n> > there was a pretty strong push that a page should be able to 'stand\n> > alone' and not depend on something else (eg: pg_control, or whatever) to\n> > provide info needed be able to interpret the page. For my part, I don't\n> > have a particularly strong feeling on that, but that's what lead to this\n> > design.\n> \n> The term that I have used in the past is \"self-contained\". Meaning\n> capable of being decoded more or less as-is, without any metadata, by\n> tools like pg_filedump.\n\nI'm not finding that very convincing - without cluster wide data, like keys, a\ntool like pg_filedump isn't going to be able to do much with encrypted\npages. Given the need to look at some global data, figuring out the offset at\nwhich data starts based on a value in pg_control isn't meaningfully worse than\nhaving the data on each page.\n\nStoring redundant data in each page header, when we've wanted space in the\npage header for plenty other things, just doesn't seem a good use of said\nspace.This scheme would open up space per page that would now be available for plenty of other things; the encoding in the header and the corresponding available space in the footer would seem to open up quite a few options now, no?",
"msg_date": "Mon, 13 Nov 2023 14:37:47 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 14:37:47 -0600, David Christensen wrote:\n> On Mon, Nov 13, 2023 at 2:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-11-08 18:47:56 -0800, Peter Geoghegan wrote:\n> > > On Wed, Nov 8, 2023 at 6:04 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > In conversations with folks (my memory specifically is a discussion\n> > with\n> > > > Peter G, added to CC, and my apologies to Peter if I'm misremembering)\n> > > > there was a pretty strong push that a page should be able to 'stand\n> > > > alone' and not depend on something else (eg: pg_control, or whatever)\n> > to\n> > > > provide info needed be able to interpret the page. For my part, I\n> > don't\n> > > > have a particularly strong feeling on that, but that's what lead to\n> > this\n> > > > design.\n> > >\n> > > The term that I have used in the past is \"self-contained\". Meaning\n> > > capable of being decoded more or less as-is, without any metadata, by\n> > > tools like pg_filedump.\n> >\n> > I'm not finding that very convincing - without cluster wide data, like\n> > keys, a\n> > tool like pg_filedump isn't going to be able to do much with encrypted\n> > pages. Given the need to look at some global data, figuring out the offset\n> > at\n> > which data starts based on a value in pg_control isn't meaningfully worse\n> > than\n> > having the data on each page.\n> >\n> > Storing redundant data in each page header, when we've wanted space in the\n> > page header for plenty other things, just doesn't seem a good use of said\n> > space.\n> >\n> \n> This scheme would open up space per page that would now be available for\n> plenty of other things; the encoding in the header and the corresponding\n> available space in the footer would seem to open up quite a few options\n> now, no?\n\nSure, if you're willing to rewrite the whole cluster to upgrade and willing to\npermanently sacrifice some data density. 
If the stored data is actually\nspecific to the page - that is the place to put the data. If not, then the\ntradeoff is much more complicated IMO.\n\nOf course this isn't a new problem - storing the page size on each page was\njust silly, it's never going to change across the cluster and even more\ndefinitely not going to change within a single relation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:52:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > This scheme would open up space per page that would now be available for\n> > plenty of other things; the encoding in the header and the corresponding\n> > available space in the footer would seem to open up quite a few options\n> > now, no?\n>\n> Sure, if you're willing to rewrite the whole cluster to upgrade and\n> willing to\n> permanently sacrifice some data density. If the stored data is actually\n> specific to the page - that is the place to put the data. If not, then the\n> tradeoff is much more complicated IMO.\n>\n> Of course this isn't a new problem - storing the page size on each page was\n> just silly, it's never going to change across the cluster and even more\n> definitely not going to change within a single relation.\n>\n\nCrazy idea; since stored pagesize is already a fixed cost that likely isn't\ngoing away, what if instead of the pd_checksum field, we instead\nreinterpret pd_pagesize_version; 4 would mean \"no page features\", but\nanything 5 or higher could be looked up as an external page feature set,\nwith storage semantics outside of the realm of the page itself (other than\nwhat the page features code itself needs to know); i.e,. 
move away from the\non-page bitmap into a more abstract representation of features which could\nbe something along the lines of what you were suggesting, including\nextension support.\n\nIt seems like this could also support adding/removing features on page\nread/write as long as there was sufficient space in the reserved_page\nspace; read the old feature set on page read, convert to the new feature\nset which will write out the page with the additional/changed format.\nObviously there would be bookkeeping to be done in terms of making sure all\npages had been converted from one format to another, but for the page level\nthis would be straightforward.\n\nJust thinking aloud here...\n\nDavid\n\nOn Mon, Nov 13, 2023 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n> This scheme would open up space per page that would now be available for\n> plenty of other things; the encoding in the header and the corresponding\n> available space in the footer would seem to open up quite a few options\n> now, no?\n\nSure, if you're willing to rewrite the whole cluster to upgrade and willing to\npermanently sacrifice some data density. If the stored data is actually\nspecific to the page - that is the place to put the data. If not, then the\ntradeoff is much more complicated IMO.\n\nOf course this isn't a new problem - storing the page size on each page was\njust silly, it's never going to change across the cluster and even more\ndefinitely not going to change within a single relation.Crazy idea; since stored pagesize is already a fixed cost that likely isn't going away, what if instead of the pd_checksum field, we instead reinterpret pd_pagesize_version; 4 would mean \"no page features\", but anything 5 or higher could be looked up as an external page feature set, with storage semantics outside of the realm of the page itself (other than what the page features code itself needs to know); i.e,. 
move away from the on-page bitmap into a more abstract representation of features which could be something along the lines of what you were suggesting, including extension support.It seems like this could also support adding/removing features on page read/write as long as there was sufficient space in the reserved_page space; read the old feature set on page read, convert to the new feature set which will write out the page with the additional/changed format. Obviously there would be bookkeeping to be done in terms of making sure all pages had been converted from one format to another, but for the page level this would be straightforward.Just thinking aloud here...David",
"msg_date": "Mon, 13 Nov 2023 15:53:18 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
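For context on the field discussed in the message above: in the current page layout, pd_pagesize_version packs the page size and the layout version into a single uint16, which is why the idea of piggybacking a feature-set indicator on the version byte is plausible. A small illustrative sketch of the existing encoding (the constants follow PostgreSQL's bufpage.h; the helper names are mine, not the proposed patch):

```python
# Sketch of the existing pd_pagesize_version encoding from PostgreSQL's
# bufpage.h: the page size (always a multiple of 256) lives in the high
# byte, the layout version in the low byte, so both fit in one uint16.

def encode_pagesize_version(page_size: int, version: int) -> int:
    assert page_size % 256 == 0 and 0 <= version < 256
    return page_size | version

def page_get_page_size(pd_pagesize_version: int) -> int:
    # mirrors PageGetPageSize: mask off the low (version) byte
    return pd_pagesize_version & 0xFF00

def page_get_page_layout_version(pd_pagesize_version: int) -> int:
    # mirrors PageGetPageLayoutVersion: keep only the low byte
    return pd_pagesize_version & 0x00FF
```

With an 8 kB block and layout version 4, the stored value is 8192 | 4 = 8196; a "version 5 or higher means an external feature set" scheme would only change how the low byte is interpreted, not the packing itself.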
{
"msg_contents": "Greetings,\n\nOn Mon, Nov 13, 2023 at 16:53 David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> On Mon, Nov 13, 2023 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>>\n>> > This scheme would open up space per page that would now be available for\n>> > plenty of other things; the encoding in the header and the corresponding\n>> > available space in the footer would seem to open up quite a few options\n>> > now, no?\n>>\n>> Sure, if you're willing to rewrite the whole cluster to upgrade and\n>> willing to\n>> permanently sacrifice some data density. If the stored data is actually\n>> specific to the page - that is the place to put the data. If not, then the\n>> tradeoff is much more complicated IMO.\n>>\n>> Of course this isn't a new problem - storing the page size on each page\n>> was\n>> just silly, it's never going to change across the cluster and even more\n>> definitely not going to change within a single relation.\n>>\n>\n> Crazy idea; since stored pagesize is already a fixed cost that likely\n> isn't going away, what if instead of the pd_checksum field, we instead\n> reinterpret pd_pagesize_version; 4 would mean \"no page features\", but\n> anything 5 or higher could be looked up as an external page feature set,\n> with storage semantics outside of the realm of the page itself (other than\n> what the page features code itself needs to know); i.e,. 
move away from the\n> on-page bitmap into a more abstract representation of features which could\n> be something along the lines of what you were suggesting, including\n> extension support.\n>\n> It seems like this could also support adding/removing features on page\n> read/write as long as there was sufficient space in the reserved_page\n> space; read the old feature set on page read, convert to the new feature\n> set which will write out the page with the additional/changed format.\n> Obviously there would be bookkeeping to be done in terms of making sure all\n> pages had been converted from one format to another, but for the page level\n> this would be straightforward.\n>\n> Just thinking aloud here...\n>\n\nIn other crazy idea space … if the page didn’t have enough space to allow\nfor the desired features then make any insert/update actions forcibly have\nto choose a different page for the new tuple, while allowing delete’s to do\ntheir usual thing, and then when vacuum comes along and is able to clean up\nthe page and remove the all dead tuples, it could then enable the features\non the page that are desired…\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Mon, Nov 13, 2023 at 16:53 David Christensen <david.christensen@crunchydata.com> wrote:On Mon, Nov 13, 2023 at 2:52 PM Andres Freund <andres@anarazel.de> wrote:\n> This scheme would open up space per page that would now be available for\n> plenty of other things; the encoding in the header and the corresponding\n> available space in the footer would seem to open up quite a few options\n> now, no?\n\nSure, if you're willing to rewrite the whole cluster to upgrade and willing to\npermanently sacrifice some data density. If the stored data is actually\nspecific to the page - that is the place to put the data. 
If not, then the\ntradeoff is much more complicated IMO.\n\nOf course this isn't a new problem - storing the page size on each page was\njust silly, it's never going to change across the cluster and even more\ndefinitely not going to change within a single relation.Crazy idea; since stored pagesize is already a fixed cost that likely isn't going away, what if instead of the pd_checksum field, we instead reinterpret pd_pagesize_version; 4 would mean \"no page features\", but anything 5 or higher could be looked up as an external page feature set, with storage semantics outside of the realm of the page itself (other than what the page features code itself needs to know); i.e,. move away from the on-page bitmap into a more abstract representation of features which could be something along the lines of what you were suggesting, including extension support.It seems like this could also support adding/removing features on page read/write as long as there was sufficient space in the reserved_page space; read the old feature set on page read, convert to the new feature set which will write out the page with the additional/changed format. Obviously there would be bookkeeping to be done in terms of making sure all pages had been converted from one format to another, but for the page level this would be straightforward.Just thinking aloud here...In other crazy idea space … if the page didn’t have enough space to allow for the desired features then make any insert/update actions forcibly have to choose a different page for the new tuple, while allowing delete’s to do their usual thing, and then when vacuum comes along and is able to clean up the page and remove the all dead tuples, it could then enable the features on the page that are desired…Thanks,Stephen",
"msg_date": "Mon, 13 Nov 2023 17:08:22 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 6:20 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> - IMO the patch touches many places it shouldn't need to touch, because of\n> essentially renaming a lot of existing macro names to *Limit,\n> necessitating modifying a lot of users. I think instead the few places\n> that\n> care about the runtime limit should be modified.\n>\n> As-is the patch would cause a lot of fallout in extensions that just do\n> things like defining an on-stack array of Datums or such - even though\n> all\n> they'd need is to change the define to the *Limit one.\n>\n> Even leaving extensions aside, it must makes reviewing (and I'm sure\n> maintaining) the patch very tedious.\n>\n\nHi Andres et al,\n\nSo I've been looking at alternate approaches to this issue and considering\nhow to reduce churn, and I think we still need the *Limit variants. Let's\ntake a simple example:\n\nJust looking at MaxHeapTuplesPerPage and breaking down instances in the\ncode, loosely partitioning into whether it's used as an array index or\nother usage (doesn't discriminate against code vs comments, unfortunately)\nwe get the following breakdown:\n\n$ git grep -hoE [[]?MaxHeapTuplesPerPage | sort | uniq -c\n 18 [MaxHeapTuplesPerPage\n 51 MaxHeapTuplesPerPage\n\nThis would be 18 places where we would need at adjust in a fairly\nmechanical fashion to add the MaxHeapTuplesPerPageLimit instead of\nMaxHeapTuplesPerPage vs some significant fraction of non-comment--even if\nyou assumed half were in comments, there would presumably need to be some\nsort of adjustments in verbage since we are going to be changing some of\nthe interpretation.\n\nI am working on a patch to cleanup some of the assumptions that smgr makes\ncurrently about its space usage and how the individual access methods\nconsider it, as they should only be calculating things based on how much\nspace is available after smgr is done with it. 
That has traditionally been\nBLCKSZ - SizeOfPageHeaderData, but this patch (included) factors that out\ninto a single expression that we can now use in access methods, so we can\nthen reserve additional page space and not need to adjust the access\nmethods further.\n\nBuilding on top of this patch, we'd define something like this to handle\nthe #defines that need to be dynamic:\n\nextern Size reserved_page_space;\n#define PageUsableSpace (BLCKSZ - SizeOfPageHeaderData -\nreserved_page_space)\n#define MaxHeapTuplesPerPage CalcMaxHeapTuplesPerPage(PageUsableSpace)\n#define MaxHeapTuplesPerPageLimit CalcMaxHeapTuplesPerPage(BLCKSZ -\nSizeOfPageHeaderData)\n#define CalcMaxHeapTuplesPerPage(freesize)\n ((int) ((freesize) / \\\n (MAXALIGN(SizeofHeapTupleHeader) +\nsizeof(ItemIdData))))\n\nIn my view, extensions that are expecting to need no changes when it comes\nto changing how these are interpreted are better off needing to only change\nthe static allocation in a mechanical sense than revisit any other uses of\ncode; this seems more likely to guarantee a correct result than if you\nexceed the page space and start overwriting things you weren't because\nyou're not aware that you need to check for dynamic limits on your own.\n\nTake another thing which would need adjusting for reserving page space,\nMaxHeapTupleSize:\n\n$ git grep -ohE '[[]?MaxHeapTupleSize' | sort | uniq -c\n 3 [MaxHeapTupleSize\n 16 MaxHeapTupleSize\n\nHere there are 3 static arrays which would need to be adjusted vs 16 other\ninstances. 
If we kept MaxHeapTupleSize interpretation the same and didn't\nadjust an extension it would compile just fine, but with too large of a\nlength compared to the smaller PageUsableSpace, so you could conceivably\noverwrite into the reserved space depending on what you were doing.\n\n(since by definition the reserved_page_space >= 0, so PageUsableSpace will\nalways be <= BLCKSZ - SizeOfPageHeaderData, so any expression based on it\nas a basis will be smaller).\n\nIn short, I think the approach I took originally actually will reduce\nerrors out-of-core, and while churn is still necessary churn.\n\nI can produce a second patch which implements this calc/limit atop this\nfirst one as well.\n\nThanks,\n\nDavid",
"msg_date": "Wed, 29 Nov 2023 09:12:31 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
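To make the static-Limit vs. runtime distinction in the macros quoted in the message above concrete, here is the same arithmetic worked out in a short sketch. The values assume a stock build (BLCKSZ = 8192, SizeOfPageHeaderData = 24, SizeofHeapTupleHeader = 23 which MAXALIGNs to 24, sizeof(ItemIdData) = 4); the variable names are illustrative:

```python
# Worked example of CalcMaxHeapTuplesPerPage(freesize) from the quoted
# macros: freesize / (MAXALIGN(SizeofHeapTupleHeader) + sizeof(ItemIdData)).

BLCKSZ = 8192
SIZE_OF_PAGE_HEADER = 24
HEAP_TUPLE_HEADER_MAXALIGNED = 24   # MAXALIGN(23) with 8-byte alignment
ITEM_ID_DATA = 4                    # one line pointer per tuple

def calc_max_heap_tuples_per_page(freesize: int) -> int:
    return freesize // (HEAP_TUPLE_HEADER_MAXALIGNED + ITEM_ID_DATA)

# The compile-time *Limit form, computed from the full page:
limit = calc_max_heap_tuples_per_page(BLCKSZ - SIZE_OF_PAGE_HEADER)

# The runtime form shrinks as reserved_page_space grows:
reserved_page_space = 32
dynamic = calc_max_heap_tuples_per_page(
    BLCKSZ - SIZE_OF_PAGE_HEADER - reserved_page_space)
```

With these numbers the Limit form gives 8168 / 28 = 291 tuples (the familiar stock value), while reserving 32 bytes drops the runtime form to 290 — which is why a statically sized on-stack array must use the Limit constant while space accounting must use the runtime one.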
{
"msg_contents": "Hi again!\n\nPer some offline discussion with Stephen, I've continued to work on some\nmodifications here; this particular patchset is intended to facilitate\nreview by highlighting the mechanical nature of many of these changes. As\nsuch, I have taken the following approach to this rework:\n\n0001 - Create PageUsableSpace to represent space post-smgr\n0002 - Add support for fast, non-division-based div/mod algorithms\n0003 - Use fastdiv code in visibility map\n0004 - Make PageUsableSpace incorporate variable-sized limit\n0005 - Add Calc, Limit and Dynamic forms of all variable constants\n0006 - Split MaxHeapTuplesPerPage into Limit and Dynamic variants\n0007 - Split MaxIndexTuplesPerPage into Limit and Dynamic variants\n0008 - Split MaxHeapTupleSize into Limit and Dynamic variants\n0009 - Split MaxTIDsPerBTreePage into Limit and Dynamic variant\n\n0001 - 0003 have appeared in this thread or in other forms on the list\nalready, though 0001 refactors things slightly more aggressively, but makes\nStaticAssert() to ensure that this change is still sane.\n\n0004 adds the ReservedPageSpace variable, and also redefines the previous\nBLCKSZ - SizeOfPageHeaderDate as PageUsableSpaceMax; there are a few\nrelated fixups.\n\n0005 adds the macros to compute the former constants while leaving their\noriginal definitions to evaluate to the same place (the infamous Calc* and\n*Limit, plus we invite *Dynamic to the party as well; the names are\nterrible and there must be something better)\n\n0006 - 0009 are all the same approach; we undefine the old constant name\nand modify the existing uses of this symbol to be either the *Limit or\n*Dynamic, depending on if the changed available space would impact the\ncalculations. Since we are touching every use of this symbol, this\nfacilitates review of the impact, though I would contend that almost every\npiece I've spot-checked seems like it really does need to know about the\nruntime limit. 
Perhaps there is more we could do here. I could also see a\nvariable per constant rather than recalculating this every time, in which\ncase the *Dynamic would just be the variable and we'd need a hook to\ninitialize this or otherwise set on first use.\n\nThere are a number of additional things remaining to be done to get this to\nfully work, but I did want to get some of this out there for review.\n\nStill to do (almost all in some form in original patch, so just need to\nextract the relevant pieces):\n- set reserved-page-size via initdb\n- load reserved-page-size from pg_control\n- apply to the running cluster\n- some form of compatibility for these constants in common and ensuring\nbin/ works\n- some toast-related changes (this requires a patch to support dynamic\nrelopts, which I can extract, as the existing code is using a constant\nlookup table)\n- probably some more pieces I'm forgetting\n\nThanks,\nDavid",
"msg_date": "Fri, 22 Dec 2023 15:24:50 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
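Patches 0002/0003 in the roadmap above replace runtime division in the visibility map with a non-division scheme; the general technique (a sketch of precomputed-reciprocal division in the Granlund-Montgomery/Lemire style, not the patch's actual code) turns division by a divisor fixed at startup into a multiply and shift:

```python
# Division/modulo by a fixed runtime divisor without a divide instruction,
# via a precomputed 64-bit fixed-point reciprocal (valid for 32-bit
# dividends and divisors).

MASK64 = (1 << 64) - 1

def reciprocal(d: int) -> int:
    assert 0 < d < 1 << 32
    return ((1 << 64) // d) + 1   # ~ceil(2^64 / d), computed once

def fast_div(n: int, m: int) -> int:
    return (m * n) >> 64          # == n // d for 0 <= n < 2^32

def fast_mod(n: int, m: int, d: int) -> int:
    low = (m * n) & MASK64        # fractional part of n/d in fixed point
    return (low * d) >> 64        # == n % d
```

A divisor such as the number of heap pages mapped by one visibility-map byte becomes a per-cluster constant once the reserved page space is known, so the reciprocal can be computed once at startup and every map lookup avoids an integer divide.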
{
"msg_contents": "Hi,\n\nI have finished the reworking of this particular patch series, and have\ntried to\norganize this in such a way that it will be easily reviewable. It is\nconstructed progressively to be able to follow what is happening here. As\nsuch,\neach individual commit is not guaranteed to compile on its own, so the whole\nseries would need to be applied before it works. (It does pass CI tests.)\n\nHere is a brief roadmap of the patches; some of them have additional\ndetails in\nthe commit message describing a little more about them.\n\nThese two patches do some refactoring of existing code to make a common\nplace to\nmodify the definitions:\n\nv3-0001-refactor-Create-PageUsableSpace-to-represent-spac.patch\nv3-0002-refactor-Make-PageGetUsablePageSize-routine.patch\n\nThese two patches add the ReservedPageSize variable and teach PageInit to\nuse to\nadjust sizing accordingly:\n\nv3-0003-feature-Add-ReservedPageSize-variable.patch\nv3-0004-feature-Adjust-page-sizes-at-PageInit.patch\n\nThis patch modifies the definitions of 4 symbols to be computed based on\nPageUsableSpace:\n\nv3-0005-feature-Create-Calc-Limit-and-Dynamic-forms-for-f.patch\n\nThese following 4 patches are mechanical replacements of all existing uses\nof\nthese symbols; this provides both visibility into where the existing symbol\nis\nused as well as distinguishing between parts that care about static\nallocation\nvs dynamic usage. 
The only non-mechanical change is to remove the\ndefinition of\nthe old symbol so we can be guaranteed that all uses have been considered:\n\nv3-0006-chore-Split-MaxHeapTuplesPerPage-into-Limit-and-D.patch\nv3-0007-chore-Split-MaxIndexTuplesPerPage-into-Limit-and-.patch\nv3-0008-chore-Split-MaxHeapTupleSize-into-Limit-and-Dynam.patch\nv3-0009-chore-Split-MaxTIDsPerBTreePage-into-Limit-and-Dy.patch\n\nThe following patches are related to required changes to support dynamic\ntoast\nlimits:\n\nv3-0010-feature-Add-hook-for-setting-reloptions-defaults-.patch\nv3-0011-feature-Dynamically-calculate-toast_tuple_target.patch\nv3-0012-feature-Add-Calc-options-for-toast-related-pieces.patch\nv3-0013-chore-Replace-TOAST_MAX_CHUNK_SIZE-with-ClusterTo.patch\nv3-0014-chore-Translation-updates-for-TOAST_MAX_CHUNK_SIZ.patch\n\nIn order to calculate some of the sizes, we need to include nbtree.h\ninternals,\nbut we can't use in front-end apps, so we separate out the pieces we care\nabout\ninto a separate include and use that:\n\nv3-0015-chore-Split-nbtree.h-structure-defs-into-an-inter.patch\n\nThis is the meat of the patch; provide a common location for these\nblock-size-related constants to be computed using the infra that has been\nset up\nso far. Also ensure that we are properly initializing this in front end and\nback end code. 
A tricky piece here is we have two separate include files\nfor\nblocksize.h; one which exposes externs as consts for optimizations, and one\nthat\nblocksize.c itself uses without consts, which it uses to create/initialized\nthe\nvars:\n\nv3-0016-feature-Calculate-all-blocksize-constants-in-a-co.patch\n\nAdd ControlFile and GUC support for reserved_page_size:\n\nv3-0017-feature-ControlFile-GUC-support-for-reserved_page.patch\n\nAdd initdb support for reserving page space:\n\nv3-0018-feature-Add-reserved_page_size-to-initdb-bootstra.patch\n\nFixes for pg_resetwal:\n\nv3-0019-feature-Updates-for-pg_resetwal.patch\n\nThe following 4 patches mechanically replace the Dynamic form to use the new\nCluster variables:\n\nv3-0020-chore-Rename-MaxHeapTupleSizeDynamic-to-ClusterMa.patch\nv3-0021-chore-Rename-MaxHeapTuplesPerPageDynamic-to-Clust.patch\nv3-0022-chore-Rename-MaxIndexTuplesPerPageDynamic-to-Clus.patch\nv3-0023-chore-Rename-MaxTIDsPerBTreePageDynamic-to-Cluste.patch\n\nTwo pieces of optimization required for visibility map:\n\nv3-0024-optimization-Add-support-for-fast-non-division-ba.patch\nv3-0025-optimization-Use-fastdiv-code-in-visibility-map.patch\n\nUpdate bufpage.h comments:\n\nv3-0026-doc-update-bufpage-docs-w-reserved-space-data.patch\n\nFixes for bloom to use runtime size:\n\nv3-0027-feature-Teach-bloom-about-PageUsableSpace.patch\n\nFixes for FSM to use runtime size:\n\nv3-0028-feature-teach-FSM-about-reserved-page-space.patch\n\nI hope this makes sense for reviewing, I know it's a big job, so breaking\nthings up a little more and organizing will hopefully help.\n\nBest,\n\nDavid",
"msg_date": "Fri, 19 Jan 2024 12:49:03 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Hi David,\n\n> I have finished the reworking of this particular patch series, and have tried to\n> organize this in such a way that it will be easily reviewable. It is\n> constructed progressively to be able to follow what is happening here. As such,\n> each individual commit is not guaranteed to compile on its own, so the whole\n> series would need to be applied before it works. (It does pass CI tests.)\n>\n> Here is a brief roadmap of the patches; some of them have additional details in\n> the commit message describing a little more about them.\n>\n> These two patches do some refactoring of existing code to make a common place to\n> modify the definitions:\n>\n> v3-0001-refactor-Create-PageUsableSpace-to-represent-spac.patch\n> v3-0002-refactor-Make-PageGetUsablePageSize-routine.patch\n>\n> These two patches add the ReservedPageSize variable and teach PageInit to use to\n> adjust sizing accordingly:\n>\n> v3-0003-feature-Add-ReservedPageSize-variable.patch\n> v3-0004-feature-Adjust-page-sizes-at-PageInit.patch\n>\n> This patch modifies the definitions of 4 symbols to be computed based on\n> PageUsableSpace:\n>\n> v3-0005-feature-Create-Calc-Limit-and-Dynamic-forms-for-f.patch\n>\n> These following 4 patches are mechanical replacements of all existing uses of\n> these symbols; this provides both visibility into where the existing symbol is\n> used as well as distinguishing between parts that care about static allocation\n> vs dynamic usage. 
The only non-mechanical change is to remove the definition of\n> the old symbol so we can be guaranteed that all uses have been considered:\n>\n> v3-0006-chore-Split-MaxHeapTuplesPerPage-into-Limit-and-D.patch\n> v3-0007-chore-Split-MaxIndexTuplesPerPage-into-Limit-and-.patch\n> v3-0008-chore-Split-MaxHeapTupleSize-into-Limit-and-Dynam.patch\n> v3-0009-chore-Split-MaxTIDsPerBTreePage-into-Limit-and-Dy.patch\n>\n> The following patches are related to required changes to support dynamic toast\n> limits:\n>\n> v3-0010-feature-Add-hook-for-setting-reloptions-defaults-.patch\n> v3-0011-feature-Dynamically-calculate-toast_tuple_target.patch\n> v3-0012-feature-Add-Calc-options-for-toast-related-pieces.patch\n> v3-0013-chore-Replace-TOAST_MAX_CHUNK_SIZE-with-ClusterTo.patch\n> v3-0014-chore-Translation-updates-for-TOAST_MAX_CHUNK_SIZ.patch\n>\n> In order to calculate some of the sizes, we need to include nbtree.h internals,\n> but we can't use in front-end apps, so we separate out the pieces we care about\n> into a separate include and use that:\n>\n> v3-0015-chore-Split-nbtree.h-structure-defs-into-an-inter.patch\n>\n> This is the meat of the patch; provide a common location for these\n> block-size-related constants to be computed using the infra that has been set up\n> so far. Also ensure that we are properly initializing this in front end and\n> back end code. 
A tricky piece here is we have two separate include files for\n> blocksize.h; one which exposes externs as consts for optimizations, and one that\n> blocksize.c itself uses without consts, which it uses to create/initialized the\n> vars:\n>\n> v3-0016-feature-Calculate-all-blocksize-constants-in-a-co.patch\n>\n> Add ControlFile and GUC support for reserved_page_size:\n>\n> v3-0017-feature-ControlFile-GUC-support-for-reserved_page.patch\n>\n> Add initdb support for reserving page space:\n>\n> v3-0018-feature-Add-reserved_page_size-to-initdb-bootstra.patch\n>\n> Fixes for pg_resetwal:\n>\n> v3-0019-feature-Updates-for-pg_resetwal.patch\n>\n> The following 4 patches mechanically replace the Dynamic form to use the new\n> Cluster variables:\n>\n> v3-0020-chore-Rename-MaxHeapTupleSizeDynamic-to-ClusterMa.patch\n> v3-0021-chore-Rename-MaxHeapTuplesPerPageDynamic-to-Clust.patch\n> v3-0022-chore-Rename-MaxIndexTuplesPerPageDynamic-to-Clus.patch\n> v3-0023-chore-Rename-MaxTIDsPerBTreePageDynamic-to-Cluste.patch\n>\n> Two pieces of optimization required for visibility map:\n>\n> v3-0024-optimization-Add-support-for-fast-non-division-ba.patch\n> v3-0025-optimization-Use-fastdiv-code-in-visibility-map.patch\n>\n> Update bufpage.h comments:\n>\n> v3-0026-doc-update-bufpage-docs-w-reserved-space-data.patch\n>\n> Fixes for bloom to use runtime size:\n>\n> v3-0027-feature-Teach-bloom-about-PageUsableSpace.patch\n>\n> Fixes for FSM to use runtime size:\n>\n> v3-0028-feature-teach-FSM-about-reserved-page-space.patch\n>\n> I hope this makes sense for reviewing, I know it's a big job, so breaking things up a little more and organizing will hopefully help.\n\nJust wanted to let you know that the patchset seems to need a rebase,\naccording to cfbot.\n\nBest regards,\nAleksander Alekseev (wearing a co-CFM hat)\n\n\n",
"msg_date": "Tue, 12 Mar 2024 16:03:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
},
{
"msg_contents": "Hi Aleksander et al,\n\nEnclosing v4 for this patch series, rebased atop the\nconstant-splitting series[1]. For the purposes of having cfbot happy,\nI am including the prerequisites as a squashed commit v4-0000, however\nthis is not technically part of this series.\n\nThe roadmap this time is similar to the last series, with some\nimprovements being made in terms of a few bug fixes and other\nreorganizations/cleanups. With the prerequisite/rework, we are able\nto eliminate some number of patches in the previous series.\n\nSquashed prerequisites, out of scope for review:\nv4-0000-squashed-prerequisites.patch\n\nRefactoring some of the existing uses of BLCKSZ and SizeOfPageHeaderData:\nv4-0001-refactor-Create-PageUsableSpace-to-represent-spac.patch\nv4-0002-refactor-Make-PageGetUsablePageSize-routine.patch\nv4-0003-feature-Add-ReservedPageSize-variable.patch\nv4-0004-feature-Adjust-page-sizes-at-PageInit.patch\n\nMaking TOAST dynamic:\nv4-0005-feature-Add-hook-for-setting-reloptions-defaults-.patch\nv4-0006-feature-Add-Calc-options-for-toast-related-pieces.patch\nv4-0007-feature-Dynamically-calculate-toast_tuple_target.patch\nv4-0008-chore-Replace-TOAST_MAX_CHUNK_SIZE-with-ClusterTo.patch\nv4-0009-chore-Translation-updates-for-TOAST_MAX_CHUNK_SIZ.patch\n\nInfra/support for blocksize calculations:\nv4-0010-chore-Split-nbtree.h-structure-defs-into-an-inter.patch\nv4-0011-Control-File-support-for-reserved_page_size.patch\nv4-0012-feature-Calculate-all-blocksize-constants-in-a-co.patch\n\nGUC/initdb/bootstrap support for setting reserved-page-size:\nv4-0013-GUC-for-reserved_page_size.patch\nv4-0014-feature-Add-reserved_page_size-to-initdb-bootstra.patch\nv4-0015-feature-Updates-for-pg_resetwal.patch\n\nOptimization of VisMap:\nv4-0016-optimization-Add-support-for-fast-non-division-ba.patch\nv4-0017-optimization-Use-fastdiv-code-in-visibility-map.patch\n\nDocs:\nv4-0018-doc-update-bufpage-docs-w-reserved-space-data.patch\n\nMisc 
cleanup/fixes:\nv4-0019-feature-Teach-bloom-about-PageUsableSpace.patch\nv4-0020-feature-teach-FSM-about-reserved-page-space.patch\nv4-0021-feature-expose-reserved_page_size-in-SQL-controld.patch\n\nWrite out of init options that are relevant:\nv4-0022-feature-save-out-our-initialization-options.patch\n\nA few notes:\n- There was a bug in the previous VisMap in v3 which resulted in\ntreating the page size as smaller than it was. This has been fixed.\n\n- v4-0022 is new, but useful for the page features going forward, and\nshould simplify some things like using `pg_resetwal` or other places\nthat really need to know how initdb was initialized.\n\n- I have done some performance metrics with this feature vs unpatched\npostgres. Since the biggest place this seemed to affect was the\nvisibility map (per profiling), I constructed an index-only scan test\ncase which basically measured nested loop against index-only lookups\nwith something like 20M rows in the index and 1M generate_series\noptions, measuring the differences between the approach we are using\n(and several others), and showing a trimmean of < 0.005 in execution\ntime.[2] This seems acceptable (if not just noise), so would be\ninterested in any sorts of performance deviations others encounter.\n\nThanks,\n\nDavid\n\n[1] https://commitfest.postgresql.org/47/4828/\n[2] https://www.pgguru.net/2024-03-13-vismap-benchmarking.txt",
"msg_date": "Wed, 13 Mar 2024 11:26:48 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Post-special page storage TDE support"
}
] |
[
{
"msg_contents": "Hello!\n\nPlease, could somebody explain what the \"compound\" queries were created for?\nMaybe i'm calling them wrong. It's about queries like:\nSELECT 1 + 2 \\; SELECT 2.0 AS \"float\" \\; SELECT 1;\n\nSuch queries can neither be prepared nor used in the extended protocol with\n ERROR: cannot insert multiple commands into a prepared statement.\nWhat are their advantages?\nAnd what is the proper name for such queries? \"Compound\" or something else?\nWould be very grateful for clarification.\n\nBest wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 25 Oct 2022 01:02:17 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Question about \"compound\" queries."
},
{
"msg_contents": "On Mon, Oct 24, 2022 at 3:02 PM Anton A. Melnikov <aamelnikov@inbox.ru>\nwrote:\n\n> Hello!\n>\n> Please, could somebody explain what the \"compound\" queries were created\n> for?\n> Maybe i'm calling them wrong. It's about queries like:\n> SELECT 1 + 2 \\; SELECT 2.0 AS \"float\" \\; SELECT 1;\n>\n> Such queries can neither be prepared nor used in the extended protocol with\n> ERROR: cannot insert multiple commands into a prepared statement.\n> What are their advantages?\n> And what is the proper name for such queries? \"Compound\" or something else?\n> Would be very grateful for clarification.\n>\n\nI suspect they came about out of simplicity - being able to simply take a\ntext file with a bunch of SQL commands in a script and send them as-is to\nthe server without any client-side parsing and let the server just deal\nwith it. It works because the system needs to do those kinds of things\nanyway so, why not make it user-facing, even if most uses would find its\nrestrictions makes it undesirable to use.\n\nDavid J.\n\nOn Mon, Oct 24, 2022 at 3:02 PM Anton A. Melnikov <aamelnikov@inbox.ru> wrote:Hello!\n\nPlease, could somebody explain what the \"compound\" queries were created for?\nMaybe i'm calling them wrong. It's about queries like:\nSELECT 1 + 2 \\; SELECT 2.0 AS \"float\" \\; SELECT 1;\n\nSuch queries can neither be prepared nor used in the extended protocol with\n ERROR: cannot insert multiple commands into a prepared statement.\nWhat are their advantages?\nAnd what is the proper name for such queries? \"Compound\" or something else?\nWould be very grateful for clarification.I suspect they came about out of simplicity - being able to simply take a text file with a bunch of SQL commands in a script and send them as-is to the server without any client-side parsing and let the server just deal with it. 
It works because the system needs to do those kinds of things anyway so, why not make it user-facing, even if most uses would find its restrictions makes it undesirable to use.David J.",
"msg_date": "Mon, 24 Oct 2022 15:36:10 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about \"compound\" queries."
},
{
"msg_contents": "Thanks a lot for the reply and timely help!\n\nOn 25.10.2022 01:36, David G. Johnston wrote:\n\n> I suspect they came about out of simplicity - being able to simply take a text file with a bunch of SQL commands in a script and send them as-is to the server without any client-side parsing and let the server just deal with it. It works because the system needs to do those kinds of things anyway so, why not make it user-facing, even if most uses would find its restrictions makes it undesirable to use.\n> \n> David J.\n> \n\nAll the best,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 25 Oct 2022 14:45:28 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: Question about \"compound\" queries."
}
] |
[
{
"msg_contents": "The AssignTransactionId has the following comments:\n\n /*\n * ensure this test matches similar one in\n * RecoverPreparedTransactions()\n */\n if (nUnreportedXids >= PGPROC_MAX_CACHED_SUBXIDS ||\n log_unknown_top)\n {\n ...\n }\n\nHowever, RecoverPreparedTransactions removes this reference in 49e9281549.\nAttached remove this reference in AssignTransactionId.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 25 Oct 2022 11:36:48 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Outdated comments in AssignTransactionId?"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs mentioned in [1], there is no regression tests for the SQL control\nfunctions: pg_control_checkpoint, pg_control_recovery,\npg_control_system and pg_control_init.\n\nIt would be minimal to check their execution, as of a \"SELECT FROM\nfunc()\", still some validation can be done on its output as long as\nthe test is portable enough (needs transparency for wal_level, commit\ntimestamps, etc.).\n\nAttached is a proposal to provide some coverage. Some of the checks\ncould be just removed, like the ones for non-NULL fields, but I have\nwritten out everything to show how much could be done.\n\nThoughts?\n\n[1]: https://www.postgresql.org/message-id/YzY0iLxNbmaxHpbs@paquier.xyz\n--\nMichael",
"msg_date": "Tue, 25 Oct 2022 14:37:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Some regression tests for the pg_control_*() functions"
},
{
"msg_contents": "On Tue, Oct 25, 2022 at 11:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> As mentioned in [1], there is no regression tests for the SQL control\n> functions: pg_control_checkpoint, pg_control_recovery,\n> pg_control_system and pg_control_init.\n>\n> It would be minimal to check their execution, as of a \"SELECT FROM\n> func()\", still some validation can be done on its output as long as\n> the test is portable enough (needs transparency for wal_level, commit\n> timestamps, etc.).\n>\n> Attached is a proposal to provide some coverage. Some of the checks\n> could be just removed, like the ones for non-NULL fields, but I have\n> written out everything to show how much could be done.\n>\n> Thoughts?\n>\n> [1]: https://www.postgresql.org/message-id/YzY0iLxNbmaxHpbs@paquier.xyz\n\n+1 for improving the test coverage. Is there a strong reason to\nvalidate individual output columns rather than select count(*) > 0\nfrom pg_control_XXXX(); sort of tests? If the intention is to validate\nthe pg_controlfile contents, we have pg_controldata to look at and\npg_control_XXXX() functions doing crc checks. If this isn't enough, we\ncan have the pg_control_validate() function to do all the necessary\nchecks and simplify the tests, no?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Oct 2022 10:13:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some regression tests for the pg_control_*() functions"
},
{
"msg_contents": "On Wed, Oct 26, 2022 at 10:13:29AM +0530, Bharath Rupireddy wrote:\n> +1 for improving the test coverage. Is there a strong reason to\n> validate individual output columns rather than select count(*) > 0\n> from pg_control_XXXX(); sort of tests? If the intention is to validate\n> the pg_controlfile contents, we have pg_controldata to look at and\n> pg_control_XXXX() functions doing crc checks.\n\nAnd it could be possible that the control file finishes by writing\nsome incorrect data due to a bug in the backend. Adding a count(*) or\nsimilar to get the number of fields of the function is basically the\nsame as checking its execution, still I'd like to think that having a\nminimum set of checks would be kind of nice on top of that. Among all\nthe ones I wrote in the patch upthread, the following ones would be in\nmy minimalistic list:\n- timeline_id > 0\n- timeline_id >= prev_timeline_id\n- checkpoint_lsn >= redo_lsn\n- data_page_checksum_version >= 0\n- Perhaps the various fields of pg_control_init() using their\nlower-bound values.\n- Perhaps pg_control_version and/or catalog_version_no > NN\n\n> If this isn't enough, we\n> can have the pg_control_validate() function to do all the necessary\n> checks and simplify the tests, no?\n\nThere is no function like that. Perhaps that you mean to introduce\nsomething like that at the C level, but that does not seem necessary\nto me as long as a SQL is able to do the job for the most meaningful\nparts.\n--\nMichael",
"msg_date": "Wed, 26 Oct 2022 16:18:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some regression tests for the pg_control_*() functions"
},
{
"msg_contents": "On Wed, Oct 26, 2022 at 12:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 26, 2022 at 10:13:29AM +0530, Bharath Rupireddy wrote:\n> > +1 for improving the test coverage. Is there a strong reason to\n> > validate individual output columns rather than select count(*) > 0\n> > from pg_control_XXXX(); sort of tests? If the intention is to validate\n> > the pg_controlfile contents, we have pg_controldata to look at and\n> > pg_control_XXXX() functions doing crc checks.\n>\n> And it could be possible that the control file finishes by writing\n> some incorrect data due to a bug in the backend.\n\nWe will have bigger problems when a backend corrupts the pg_control\nfile, no? The bigger problems could be that the server won't come up\nor it behaves abnormally or some other.\n\n> Adding a count(*) or\n> similar to get the number of fields of the function is basically the\n> same as checking its execution, still I'd like to think that having a\n> minimum set of checks would be kind of nice on top of that. Among all\n> the ones I wrote in the patch upthread, the following ones would be in\n> my minimalistic list:\n> - timeline_id > 0\n> - timeline_id >= prev_timeline_id\n> - checkpoint_lsn >= redo_lsn\n> - data_page_checksum_version >= 0\n> - Perhaps the various fields of pg_control_init() using their\n> lower-bound values.\n> - Perhaps pg_control_version and/or catalog_version_no > NN\n\nCan't the CRC check detect any of the above corruptions? 
Do we have\nany evidence of backend corrupting the pg_control file or any of the\nabove variables while running regression tests?\n\nIf the concern is backend corrupting the pg_control file and CRC check\ncan't detect it, then the extra checks (as proposed in the patch) must\nbe placed within the core (perhaps before writing/after reading the\npg_control file), not in regression tests for sure.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Oct 2022 13:41:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some regression tests for the pg_control_*() functions"
},
{
"msg_contents": "On Wed, Oct 26, 2022 at 01:41:12PM +0530, Bharath Rupireddy wrote:\n> We will have bigger problems when a backend corrupts the pg_control\n> file, no? The bigger problems could be that the server won't come up\n> or it behaves abnormally or some other.\n\nPossibly, yes.\n\n> Can't the CRC check detect any of the above corruptions? Do we have\n> any evidence of backend corrupting the pg_control file or any of the\n> above variables while running regression tests?\n\nIt could be possible that the backend writes an incorrect data\ncombination though its APIs, where the CRC is correct but the data is\nnot (say a TLI of 0, as one example).\n\n> If the concern is backend corrupting the pg_control file and CRC check\n> can't detect it, then the extra checks (as proposed in the patch) must\n> be placed within the core (perhaps before writing/after reading the\n> pg_control file), not in regression tests for sure.\n\nWell, that depends on the level of protection you want. Now there are\nthings in place already when it comes to recovery or at startup.\nAnyway, the recent experience with the 56-bit relfilenode thread is\nreally that we don't check the execution of these functions at all,\nand that's the actual minimal requirement, so I have applied a patch\nbased on count(*) > 0 for now to cover that. I am not sure if any of\nthe checks for the control file fields are valuable, perhaps some\nare..\n--\nMichael",
"msg_date": "Thu, 27 Oct 2022 10:03:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Some regression tests for the pg_control_*() functions"
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nI'm a bit confused about TransactionIdSetTreeStatus, the comment says if\nsubtransactions cross multiple CLOG pages, it will mark the subxids, that\nare on the same page as the main transaction, as sub-committed, and then\nset main transaction and subtransactions to committed (step 2).\n\n * Example:\n * TransactionId t commits and has subxids t1, t2, t3, t4\n * t is on page p1, t1 is also on p1, t2 and t3 are on p2, t4 is on p3\n * 1. update pages2-3:\n * page2: set t2,t3 as sub-committed\n * page3: set t4 as sub-committed\n * 2. update page1:\n * set t1 as sub-committed,\n * then set t as committed,\n then set t1 as committed\n * 3. update pages2-3:\n * page2: set t2,t3 as committed\n * page3: set t4 as committed\n\nHowever, the code marks the main transaction and subtransactions directly\nto the committed.\n\n /*\n * If this is a commit then we care about doing this correctly (i.e.\n * using the subcommitted intermediate status). By here, we know\n * we're updating more than one page of clog, so we must mark entries\n * that are *not* on the first page so that they show as subcommitted\n * before we then return to update the status to fully committed.\n *\n * To avoid touching the first page twice, skip marking subcommitted\n * for the subxids on that first page.\n */\n if (status == TRANSACTION_STATUS_COMMITTED)\n set_status_by_pages(nsubxids - nsubxids_on_first_page,\n subxids + nsubxids_on_first_page,\n TRANSACTION_STATUS_SUB_COMMITTED, lsn);\n\n /*\n * Now set the parent and subtransactions on same page as the parent,\n * if any\n */\n pageno = TransactionIdToPage(xid);\n TransactionIdSetPageStatus(xid, nsubxids_on_first_page, subxids, status,\n lsn, pageno, false);\n\n /*\n * Now work through the rest of the subxids one clog page at a time,\n * starting from the second page onwards, like we did above.\n */\n set_status_by_pages(nsubxids - nsubxids_on_first_page,\n subxids + nsubxids_on_first_page,\n status, lsn);\n\nIs the 
comment correct? If not, should we remove it?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Tue, 25 Oct 2022 17:02:25 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Confused about TransactionIdSetTreeStatus"
},
{
"msg_contents": "On 25/10/2022 12:02, Japin Li wrote:\n> I'm a bit confused about TransactionIdSetTreeStatus, the comment says if\n> subtransactions cross multiple CLOG pages, it will mark the subxids, that\n> are on the same page as the main transaction, as sub-committed, and then\n> set main transaction and subtransactions to committed (step 2).\n> \n> * Example:\n> * TransactionId t commits and has subxids t1, t2, t3, t4\n> * t is on page p1, t1 is also on p1, t2 and t3 are on p2, t4 is on p3\n> * 1. update pages2-3:\n> * page2: set t2,t3 as sub-committed\n> * page3: set t4 as sub-committed\n> * 2. update page1:\n> * set t1 as sub-committed,\n> * then set t as committed,\n> then set t1 as committed\n> * 3. update pages2-3:\n> * page2: set t2,t3 as committed\n> * page3: set t4 as committed\n> \n> However, the code marks the main transaction and subtransactions directly\n> to the committed.\n\nHmm, yeah, step 2 in this example doesn't match reality. We actually set \nt and t1 directly as committed. The explanation above that comment is \ncorrect, but the example is not. It used to work the way the example \nsays, but that was changed in commit \n06da3c570f21394003fc392d80f54862f7dec19f. Ironically, that commit also \nadded the outdated comment.\n\nThe correct example would be:\n\nTransactionId t commits and has subxids t1, t2, t3, t4 t is on page p1, \nt1 is also on p1, t2 and t3 are on p2, t4 is on p3\n1. update pages2-3:\n page2: set t2,t3 as sub-committed\n page3: set t4 as sub-committed\n2. update page1:\n page1: set t,t1 as committed,\n3. update pages2-3:\n page2: set t2,t3 as committed\n page3: set t4 as committed\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 25 Oct 2022 16:46:49 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Confused about TransactionIdSetTreeStatus"
},
{
"msg_contents": "On Tue, 25 Oct 2022 at 22:46, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 25/10/2022 12:02, Japin Li wrote:\n>> However, the code marks the main transaction and subtransactions directly\n>> to the committed.\n>\n> Hmm, yeah, step 2 in this example doesn't match reality. We actually\n> set t and t1 directly as committed. The explanation above that comment\n> is correct, but the example is not. It used to work the way the\n> example says, but that was changed in commit\n> 06da3c570f21394003fc392d80f54862f7dec19f. Ironically, that commit also\n> added the outdated comment.\n>\n> The correct example would be:\n>\n> TransactionId t commits and has subxids t1, t2, t3, t4 t is on page\n> p1, t1 is also on p1, t2 and t3 are on p2, t4 is on p3\n> 1. update pages2-3:\n> page2: set t2,t3 as sub-committed\n> page3: set t4 as sub-committed\n> 2. update page1:\n> page1: set t,t1 as committed,\n> 3. update pages2-3:\n> page2: set t2,t3 as committed\n> page3: set t4 as committed\n>\n\nThanks for your explanation. Attach a patch to remove the outdated comment.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Tue, 25 Oct 2022 23:09:03 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Confused about TransactionIdSetTreeStatus"
},
{
"msg_contents": "On 25/10/2022 18:09, Japin Li wrote:\n> \n> On Tue, 25 Oct 2022 at 22:46, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> On 25/10/2022 12:02, Japin Li wrote:\n>>> However, the code marks the main transaction and subtransactions directly\n>>> to the committed.\n>>\n>> Hmm, yeah, step 2 in this example doesn't match reality. We actually\n>> set t and t1 directly as committed. The explanation above that comment\n>> is correct, but the example is not. It used to work the way the\n>> example says, but that was changed in commit\n>> 06da3c570f21394003fc392d80f54862f7dec19f. Ironically, that commit also\n>> added the outdated comment.\n>>\n>> The correct example would be:\n>>\n>> TransactionId t commits and has subxids t1, t2, t3, t4 t is on page\n>> p1, t1 is also on p1, t2 and t3 are on p2, t4 is on p3\n>> 1. update pages2-3:\n>> page2: set t2,t3 as sub-committed\n>> page3: set t4 as sub-committed\n>> 2. update page1:\n>> page1: set t,t1 as committed,\n>> 3. update pages2-3:\n>> page2: set t2,t3 as committed\n>> page3: set t4 as committed\n> \n> Thanks for your explanation. Attach a patch to remove the outdated comment.\n\nApplied, thanks!\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 25 Oct 2022 21:44:46 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Confused about TransactionIdSetTreeStatus"
}
] |
[
{
"msg_contents": "While fooling with my longstanding outer-join variables changes\n(I am making progress on that, honest), I happened to notice that\nequivclass.c is leaving some money on the table by generating\nredundant RestrictInfo clauses. It already attempts to not generate\nthe same clause twice, which can save a nontrivial amount of work\nbecause we cache selectivity estimates and so on per-RestrictInfo.\nI realized though that it will build and return a clause like\n\"a.x = b.y\" even if it already has \"b.y = a.x\". This is just\nwasteful. It's always been the case that equivclass.c will\nproduce clauses that are ordered according to its own whims.\nConsumers that need the operands in a specific order, such as\nindex scans or hash joins, are required to commute the clause\nto be the way they want it while building the finished plan.\nTherefore, it shouldn't matter which order of the operands we\nreturn, and giving back the commutator clause if available could\npotentially save as much as half of the selectivity-estimation\nwork we do with these clauses subsequently.\n\nHence, PFA a patch that adjusts create_join_clause() to notice\ncommuted as well as exact matches among the EquivalenceClass's\nexisting clauses. This results in a number of changes visible in\nregression test cases, but they're all clearly inconsequential.\n\nThe only thing that I think might be controversial here is that\nI dropped the check for matching operator OID. To preserve that,\nwe'd have needed to use get_commutator() in the reverse-match cases,\nwhich it seemed to me would be a completely unjustified expenditure\nof cycles. 
The operators we select for freshly-generated clauses\nwill certainly always match those of previously-generated clauses.\nMaybe there's a chance that they'd not match those of ec_sources\nclauses (that is, the user-written clauses we started from), but\nif they don't and that actually makes any difference then surely\nwe are talking about a buggy opclass definition.\n\nI've not bothered to make any performance tests to see if there's\nactually an easily measurable gain here. Saving some duplicative\nselectivity estimates could be down in the noise ... but it's\nsurely worth the tiny number of extra tests added here.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 25 Oct 2022 18:09:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Reducing duplicativeness of EquivalenceClass-derived clauses"
},
{
"msg_contents": "On Wed, Oct 26, 2022 at 6:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> While fooling with my longstanding outer-join variables changes\n> (I am making progress on that, honest), I happened to notice that\n> equivclass.c is leaving some money on the table by generating\n> redundant RestrictInfo clauses. It already attempts to not generate\n> the same clause twice, which can save a nontrivial amount of work\n> because we cache selectivity estimates and so on per-RestrictInfo.\n> I realized though that it will build and return a clause like\n> \"a.x = b.y\" even if it already has \"b.y = a.x\". This is just\n> wasteful. It's always been the case that equivclass.c will\n> produce clauses that are ordered according to its own whims.\n> Consumers that need the operands in a specific order, such as\n> index scans or hash joins, are required to commute the clause\n> to be the way they want it while building the finished plan.\n> Therefore, it shouldn't matter which order of the operands we\n> return, and giving back the commutator clause if available could\n> potentially save as much as half of the selectivity-estimation\n> work we do with these clauses subsequently.\n>\n> Hence, PFA a patch that adjusts create_join_clause() to notice\n> commuted as well as exact matches among the EquivalenceClass's\n> existing clauses. This results in a number of changes visible in\n> regression test cases, but they're all clearly inconsequential.\n\n\nI think there is no problem with this idea, given the operands of\nEC-derived clauses are commutative, and it seems no one would actually\nrely on the order of the operands. I can see hashjoin/mergejoin would\ncommute hash/merge joinclauses if needed with get_switched_clauses().\n\n\n>\n> The only thing that I think might be controversial here is that\n> I dropped the check for matching operator OID. 
To preserve that,\n> we'd have needed to use get_commutator() in the reverse-match cases,\n> which it seemed to me would be a completely unjustified expenditure\n> of cycles. The operators we select for freshly-generated clauses\n> will certainly always match those of previously-generated clauses.\n> Maybe there's a chance that they'd not match those of ec_sources\n> clauses (that is, the user-written clauses we started from), but\n> if they don't and that actually makes any difference then surely\n> we are talking about a buggy opclass definition.\n\n\nThe operator is chosen according to the two given EC members's data\ntype. Since we are dealing with the same pair of EC members, I think\nthe operator is always the same one. So it also seems no problem to drop\nthe check for operator. I wonder if we can even add an assertion if\nwe've found a RestrictInfo from ec_derives that the operator matches.\n\nThanks\nRichard",
"msg_date": "Wed, 26 Oct 2022 19:04:56 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing duplicativeness of EquivalenceClass-derived clauses"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Wed, Oct 26, 2022 at 6:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The only thing that I think might be controversial here is that\n>> I dropped the check for matching operator OID. To preserve that,\n>> we'd have needed to use get_commutator() in the reverse-match cases,\n>> which it seemed to me would be a completely unjustified expenditure\n>> of cycles. The operators we select for freshly-generated clauses\n>> will certainly always match those of previously-generated clauses.\n>> Maybe there's a chance that they'd not match those of ec_sources\n>> clauses (that is, the user-written clauses we started from), but\n>> if they don't and that actually makes any difference then surely\n>> we are talking about a buggy opclass definition.\n\n> The operator is chosen according to the two given EC members's data\n> type. Since we are dealing with the same pair of EC members, I think\n> the operator is always the same one. So it also seems no problem to drop\n> the check for operator. I wonder if we can even add an assertion if\n> we've found a RestrictInfo from ec_derives that the operator matches.\n\nYeah, I considered that --- even if somehow an ec_sources entry isn't\nan exact match, ec_derives ought to be. However, it still didn't seem\nworth a get_commutator() call. We'd basically be expending cycles to\ncheck that select_equality_operator yields the same result with the same\ninputs as it did before, and that doesn't seem terribly interesting to\ncheck. I'm also not sure what's the point of allowing divergence\nfrom the requested operator in some but not all paths.\n\nI added a bit of instrumentation to count how many times we need to build\nnew join clauses in create_join_clause. In the current core regression\ntests, I see this change reducing the number of new join clauses built\nhere from 9673 to 5142 (out of 26652 calls). So not quite 50% savings,\nbut pretty close to it. 
That should mean that this change is about\na wash just in terms of the code it touches directly: each iteration\nof the search loops is nearly twice as expensive as before, but we'll\nonly need to do about half as many. So whatever we save downstream\nis pure gravy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Oct 2022 09:54:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing duplicativeness of EquivalenceClass-derived clauses"
},
{
"msg_contents": "HI,\n\nOn Oct 26, 2022, 06:09 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n> While fooling with my longstanding outer-join variables changes\n> (I am making progress on that, honest), I happened to notice that\n> equivclass.c is leaving some money on the table by generating\n> redundant RestrictInfo clauses. It already attempts to not generate\n> the same clause twice, which can save a nontrivial amount of work\n> because we cache selectivity estimates and so on per-RestrictInfo.\n> I realized though that it will build and return a clause like\n> \"a.x = b.y\" even if it already has \"b.y = a.x\". This is just\n> wasteful. It's always been the case that equivclass.c will\n> produce clauses that are ordered according to its own whims.\n> Consumers that need the operands in a specific order, such as\n> index scans or hash joins, are required to commute the clause\n> to be the way they want it while building the finished plan.\n> Therefore, it shouldn't matter which order of the operands we\n> return, and giving back the commutator clause if available could\n> potentially save as much as half of the selectivity-estimation\n> work we do with these clauses subsequently.\n>\n> Hence, PFA a patch that adjusts create_join_clause() to notice\n> commuted as well as exact matches among the EquivalenceClass's\n> existing clauses. This results in a number of changes visible in\n> regression test cases, but they're all clearly inconsequential.\n>\n> The only thing that I think might be controversial here is that\n> I dropped the check for matching operator OID. To preserve that,\n> we'd have needed to use get_commutator() in the reverse-match cases,\n> which it seemed to me would be a completely unjustified expenditure\n> of cycles. 
The operators we select\n> will certainly always match those of previously-generated clauses.\n> Maybe there's a chance that they'd not match those of ec_sources\n> clauses (that is, the user-written clauses we started from), but\n> if they don't and that actually makes any difference then surely\n> we are talking about a buggy opclass definition.\n>\n> I've not bothered to make any performance tests to see if there's\n> actually an easily measurable gain here. Saving some duplicative\n> selectivity estimates could be down in the noise ... but it's\n> surely worth the tiny number of extra tests added here.\n>\n> Comments?\n>\n> regards, tom lane\n>\nMake sense.\n\nHow about combine ec->ec_sources and ec->derives as one list for less codes?\n\n```\nforeach(lc, list_union(ec->ec_sources, ec->ec_derives))\n{\n rinfo = (RestrictInfo *) lfirst(lc);\n if (rinfo->left_em == leftem &&\n rinfo->right_em == rightem &&\n rinfo->parent_ec == parent_ec)\n return rinfo;\n if (rinfo->left_em == rightem &&\n rinfo->right_em == leftem &&\n rinfo->parent_ec == parent_ec)\n return rinfo;\n}\n```\nI have a try, it will change some in join.out and avoid changes in tidscan.out.\n\nRegards,\nZhang Mingli",
"msg_date": "Thu, 27 Oct 2022 21:21:17 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing duplicativeness of EquivalenceClass-derived\n clauses"
},
{
"msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> How about combine ec->ec_sources and ec->derives as one list for less codes?\n\nKeeping them separate is required for the broken-EC code paths.\nEven if it weren't, I wouldn't merge them just to save a couple\nof lines of code --- I think it's useful to be able to tell which\nclauses the EC started from.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Oct 2022 09:29:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing duplicativeness of EquivalenceClass-derived clauses"
},
{
"msg_contents": "Hi,\n\nOn Oct 27, 2022, 21:29 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n> Zhang Mingli <zmlpostgres@gmail.com> writes:\n> > How about combine ec->ec_sources and ec->derives as one list for less codes?\n>\n> Keeping them separate is required for the broken-EC code paths.\n> Even if it weren't, I wouldn't merge them just to save a couple\n> of lines of code --- I think it's useful to be able to tell which\n> clauses the EC started from.\n>\n> regards, tom lane\nGot it, thanks.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi,\n\nOn Oct 27, 2022, 21:29 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\nZhang Mingli <zmlpostgres@gmail.com> writes:\nHow about combine ec->ec_sources and ec->derives as one list for less codes?\n\nKeeping them separate is required for the broken-EC code paths.\nEven if it weren't, I wouldn't merge them just to save a couple\nof lines of code --- I think it's useful to be able to tell which\nclauses the EC started from.\n\nregards, tom lane\nGot it, thanks.\n\n\nRegards,\nZhang Mingli",
"msg_date": "Thu, 27 Oct 2022 21:37:04 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing duplicativeness of EquivalenceClass-derived\n clauses"
}
] |
[
{
"msg_contents": "typedef struct A_Expr\n\n\n\n{\n\n\n\n pg_node_attr(custom_read_write)\n\n\n\n NodeTag type;\n\n\n\n A_Expr_Kind kind; /* see above */\n\n\n\n List *name; /* possibly-qualified name of operator */\n\n\n\n Node *lexpr; /* left argument, or NULL if none */\n\n\n\n Node *rexpr; /* right argument, or NULL if none */\n\n\n\n int location; /* token location, or -1 if unknown */\n\n\n\n} A_Expr;\n\n\n\nI run a sql like select a,b from t3 where a > 1; and I get the parseTree for selectStmt:\n\n\n\nwhy the name is '-' but not '>'?\n\n\n\n\n\n\n\njacktby@gmail.com",
"msg_date": "Wed, 26 Oct 2022 17:13:49 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "confused with name in the pic"
},
{
"msg_contents": "On Wed, Oct 26, 2022 at 2:13 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n\n> typedef struct A_Expr\n>\n>\n>\n> {\n>\n>\n>\n> pg_node_attr(custom_read_write)\n>\n>\n>\n> NodeTag type;\n>\n>\n>\n> A_Expr_Kind kind; /* see above */\n>\n>\n>\n> List *name; /* possibly-qualified name of operator */\n>\n>\n>\n> Node *lexpr; /* left argument, or NULL if none */\n>\n>\n>\n> Node *rexpr; /* right argument, or NULL if none */\n>\n>\n>\n> int location; /* token location, or -1 if unknown */\n>\n>\n>\n> } A_Expr;\n>\n>\n>\n> I run a sql like select a,b from t3 where a > 1; and I get the parseTree\n> for selectStmt:\n>\n>\n>\n> why the name is '-' but not '>'?\n>\n>\nGiven the general lack of interest in this so far I'd suggest you put\ntogether a minimal test case that includes a simple print-to-log command\npatched into HEAD showing the problematic parse tree in its internal form.\n\nPosting an image of some custom visualization that seems evidently produced\nby custom code that itself may be buggy against an unspecified server with\nno indication how any of this all works doesn't seem like enough detail and\nthere is little reason to think that such an obvious bug could exist. I\ndo agree that your expectation seems quite sound. 
Though I do not have the\nfaintest idea how to actually go about reproducing your result even in the\nminimal way described above (though I know enough to know it is possible,\njust not where to patch).\n\nDavid J.",
"msg_date": "Wed, 26 Oct 2022 16:55:29 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: confused with name in the pic"
}
] |
[
{
"msg_contents": "As part of the AIO work [1], there are quite a number of dlist_heads\nwhich a counter is used to keep track on how many items are in the\nlist. We also have a few places in master which do the same thing.\n\nIn order to tidy this up and to help ensure that the count variable\ndoes not get out of sync with the items which are stored in the list,\nhow about we introduce \"dclist\" which maintains the count for us?\n\nI've attached a patch which does this. The majority of the functions\nfor the new type are just wrappers around the equivalent dlist\nfunction.\n\ndclist provides all of the functionality that dlist does except\nthere's no dclist_delete() function. dlist_delete() can be done by\njust knowing the element to delete and not the list that the element\nbelongs to. With dclist, that's not possible as we must also subtract\n1 from the count variable and obviously we need the dclist_head for\nthat.\n\nI'll add this to the November commitfest.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/20210223100344.llw5an2aklengrmn@alap3.anarazel.de",
"msg_date": "Thu, 27 Oct 2022 16:35:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adding doubly linked list type which stores the number of items in\n the list"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 9:05 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> As part of the AIO work [1], there are quite a number of dlist_heads\n> which a counter is used to keep track on how many items are in the\n> list. We also have a few places in master which do the same thing.\n>\n> In order to tidy this up and to help ensure that the count variable\n> does not get out of sync with the items which are stored in the list,\n> how about we introduce \"dclist\" which maintains the count for us?\n>\n> I've attached a patch which does this. The majority of the functions\n> for the new type are just wrappers around the equivalent dlist\n> function.\n>\n> dclist provides all of the functionality that dlist does except\n> there's no dclist_delete() function. dlist_delete() can be done by\n> just knowing the element to delete and not the list that the element\n> belongs to. With dclist, that's not possible as we must also subtract\n> 1 from the count variable and obviously we need the dclist_head for\n> that.\n>\n> [1] https://www.postgresql.org/message-id/flat/20210223100344.llw5an2aklengrmn@alap3.anarazel.de\n\n+1. Using dlist_head in dclist_head enables us to reuse dlist_* functions.\n\nSome comments on the patch:\n1. I think it's better to just return dlist_is_empty(&head->dlist) &&\n(head->count == 0); from dclist_is_empty() and remove the assert for\nbetter readability and safety against count being zero.\n\n2. Missing dlist_is_memberof() in dclist_delete_from()?\n\n3. Just thinking if we need to move dlist_is_memberof() check from\ndclist_* functions to dlist_* functions, because they also need such\ninsurance against callers passing spurious nodes.\n\n4. More opportunities to use dclist_* in below places, no?\n dlist_push_tail(&src->mappings, &pmap->node);\n src->num_mappings++;\n\n dlist_push_head(&MXactCache, &entry->node);\n if (MXactCacheMembers++ >= MAX_CACHE_ENTRIES)\n\n5. dlist_is_memberof() - do we need this at all? 
We trust the callers\nof dlist_* today that the passed in node belongs to the list, no?\n\n6. If we decide to have dlist_is_memberof() and the asserts around it,\ncan we design it on the similar lines as dlist_check() to avoid many\n#ifdef ILIST_DEBUG #endif blocks spreading around the code?\n\n7. Do we need Assert(head->count > 0); in more places like dclist_delete_from()?\n\n8. Don't we need dlist_container(), dlist_head_element(),\ndlist_tail_element() for dclist_*? Even though, we might not use them\nimmediately, just for the sake for completeness of dclist data\nstructure.\n\n> I'll add this to the November commitfest.\n\nYes, please.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 27 Oct 2022 12:02:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "Thank you for having a look at this.\n\nOn Thu, 27 Oct 2022 at 19:32, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Some comments on the patch:\n> 1. I think it's better to just return dlist_is_empty(&head->dlist) &&\n> (head->count == 0); from dclist_is_empty() and remove the assert for\n> better readability and safety against count being zero.\n\nI don't think that's a good change. For 1) it adds unnecessary\noverhead due to the redundant checks and 2) it removes the Assert\nwhich is our early warning that the dclist's count is getting out of\nsync somewhere.\n\n> 2. Missing dlist_is_memberof() in dclist_delete_from()?\n\nI put that in dlist_delete_from() which is called from dclist_delete_from().\n\n> 3. Just thinking if we need to move dlist_is_memberof() check from\n> dclist_* functions to dlist_* functions, because they also need such\n> insurance against callers passing spurious nodes.\n\nI think the affected functions there would be; dlist_move_head(),\ndlist_move_tail(), dlist_has_next(), dlist_has_prev(),\ndlist_next_node() and dlist_prev_node(). I believe if we did that then\nit's effectively an API change. The comments only claim that it's\nundefined if node is not a member of the list. It does not say 'node'\n*must* be part of the list. Now, perhaps doing this would just make\nit more likely that we'd find bugs in our code and extension authors\nwould find bugs in their code, but it does move the bar.\ndlist_move_head and dlist_move_tail look like they'd work perfectly\nwell to remove an item from 1 list and put it on the head or tail of\nsome completely different list. Should we really be changing that in a\npatch that is meant to just add the dclist type?\n\n> 4. 
More opportunities to use dclist_* in below places, no?\n> dlist_push_tail(&src->mappings, &pmap->node);\n> src->num_mappings++;\n>\n> dlist_push_head(&MXactCache, &entry->node);\n> if (MXactCacheMembers++ >= MAX_CACHE_ENTRIES)\n\nThanks for finding those. I've adjusted them both to use dclists.\n\n> 5. dlist_is_memberof() - do we need this at all? We trust the callers\n> of dlist_* today that the passed in node belongs to the list, no?\n\nhmm, this seems to contradict your #3?\n\nIf you look at something like dlist_move_head(), if someone calls that\nand passes a 'node' that does not belong to 'head' then the result of\nthat is that we delete 'node' from whichever dlist that it's on and\npush it onto 'head'. Nothing bad happens there. If we do the same on\na dclist then the count gets out of sync. That's bad as it could lead\nto assert failures and bugs.\n\n> 6. If we decide to have dlist_is_memberof() and the asserts around it,\n> can we design it on the similar lines as dlist_check() to avoid many\n> #ifdef ILIST_DEBUG #endif blocks spreading around the code?\n\nOK, that likely is a better idea. I've done this in the attached by\nway of dlist_member_check()\n\n> 7. Do we need Assert(head->count > 0); in more places like dclist_delete_from()?\n\nI guess it does no harm. I've added some additional ones in the attached.\n\n> 8. Don't we need dlist_container(), dlist_head_element(),\n> dlist_tail_element() for dclist_*? 
Even though, we might not use them\n> immediately, just for the sake for completeness of dclist data\n> structure.\n\nOK, I think I'd left those because dclist_container() would just be\nthe same as dlist_container(), but that's not the case for the other\ntwo, so I've added all 3.\n\nOne additional change is that I also ended up removing the use of\ndclist that I had in the previous patch for ReorderBufferTXN.subtxns.\nLooking more closely at the code in ReorderBufferAssignChild():\n\n/*\n* We already saw this transaction, but initially added it to the\n* list of top-level txns. Now that we know it's not top-level,\n* remove it from there.\n*/\ndlist_delete(&subtxn->node);\n\nThe problem is that since ReorderBufferTXN is used for both\ntransactions and sub-transactions that it's not easy to determine if\nthe ReorderBufferTXN.node is part of the ReorderBuffer.toplevel_by_lsn\ndlist or the ReorderBufferTXN.subtxns. It seems safer just to leave\nthis one alone.\n\nDavid",
"msg_date": "Fri, 28 Oct 2022 18:31:45 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 11:01 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> > 3. Just thinking if we need to move dlist_is_memberof() check from\n> > dclist_* functions to dlist_* functions, because they also need such\n> > insurance against callers passing spurious nodes.\n>\n> I think the affected functions there would be; dlist_move_head(),\n> dlist_move_tail(), dlist_has_next(), dlist_has_prev(),\n> dlist_next_node() and dlist_prev_node(). I believe if we did that then\n> it's effectively an API change. The comments only claim that it's\n> undefined if node is not a member of the list. It does not say 'node'\n> *must* be part of the list. Now, perhaps doing this would just make\n> it more likely that we'd find bugs in our code and extension authors\n> would find bugs in their code, but it does move the bar.\n> dlist_move_head and dlist_move_tail look like they'd work perfectly\n> well to remove an item from 1 list and put it on the head or tail of\n> some completely different list. Should we really be changing that in a\n> patch that is meant to just add the dclist type?\n\nHm. Let's not touch that here.\n\nThanks for the patch. Few more comments on v2:\n1. I guess we need to cast the 'node' parameter too, something like\nbelow? I'm reading the comment there talking about compilers\ncomplaning about the unused function arguments.\ndlist_member_check(head, node) ((void) (head); (void) (node);)\n\n2.\n+ * maximum of UINT32 elements. It is up to the caller to ensure no more than\n+ * this many items are added to a dclist.\nCan we put max limit, at least in assert, something like below, on\n'count' instead of saying above? I'm not sure if there's someone\nstoring 4 billion items, but it will be a good-to-have safety from the\ndata structure perspective if others think it's not an overkill.\nAssert(head->count > 0 && head->count <= PG_UINT32_MAX);\n\n3. 
I guess, we can split up the patches for the ease of review, 0001\nintroducing dclist_* data structure and 0002 using it. It's not\nmandatory though. The two patches can go separately if needed.\n\n4.\n+/*\n+ * As dlist_delete but performs checks in ILIST_DEBUG to ensure that 'node'\n+ * belongs to 'head'.\nI think 'Same as dlist_delete' instead of just 'As dlist_delete'\n\n5.\n+ * Caution: 'node' must be a member of 'head'.\n+ * Caller must ensure that 'before' is a member of 'head'.\nCan we have the same comments around something like below?\n+ * Caller must ensure that 'node' must be a member of 'head'.\n+ * Caller must ensure that 'before' is a member of 'head'.\nor\n+ * Caution: 'node' must be a member of 'head'.\n+ * Caution: 'before' must be a member of 'head'.\nor\n * Caution: unreliable if 'node' is not in the list.\n * Caution: unreliable if 'before' is not in the list.\n\n6.\n+dclist_has_prev(dclist_head *head, dlist_node *node)\n+{\n+ dlist_member_check(&head->dlist, node);\n+\n+ Assert(head->count > 0);\n\n+ Assert(head->count > 0);\n+\n+ return (dlist_node *) dlist_head_element_off(&head->dlist, 0);\n\n+ Assert(head->count > 0);\n+\n+ return (dlist_node *) dlist_tail_element_off(&head->dlist, 0);\n\n+ Assert(!dclist_is_empty(head));\n+ return (char *) head->dlist.head.next - off;\n\n+ dlist_member_check(&head->dlist, node);\n+\n+ Assert(head->count > 0);\n+\n+ return dlist_has_prev(&head->dlist, node);\n\n+ dlist_member_check(&head->dlist, node);\n+ Assert(head->count > 0);\n+\n+ return dlist_has_next(&head->dlist, node);\n\nRemove extra lines in between and have them uniformly across,\nsomething like below?\n\ndlist_member_check();\nAssert();\n\nreturn XXX;\n\n8. Wondering if we need dlist_delete_from() at all. Can't we just add\ndlist_member_check() dclist_delete_from() and call dlist_delete()\ndirectly?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 29 Oct 2022 11:02:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "Thank you for having another look at this\n\nOn Sat, 29 Oct 2022 at 18:32, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> 1. I guess we need to cast the 'node' parameter too, something like\n> below? I'm reading the comment there talking about compilers\n> complaning about the unused function arguments.\n> dlist_member_check(head, node) ((void) (head); (void) (node);)\n\nI looked at dlist_check() and I didn't quite manage to figure out why\nthe cast is needed. As far as I can see, there are no calls where we\nonly pass dlist_head solely for the dlist_check(). For\ndlist_member_check(), dlist_delete_from() does not use the 'head'\nparameter for anything apart from dlist_member_check(), so I believe\nthe cast is required for 'head'. I think I'd rather only add the cast\nfor 'node' unless we really require it. Cargo-culting it in there just\nbecause that's what the other macros do does not seem like a good idea\nto me.\n\n> Can we put max limit, at least in assert, something like below, on\n> 'count' instead of saying above? I'm not sure if there's someone\n> storing 4 billion items, but it will be a good-to-have safety from the\n> data structure perspective if others think it's not an overkill.\n> Assert(head->count > 0 && head->count <= PG_UINT32_MAX);\n\n'count' is a uint32. It's always going to be <= PG_UINT32_MAX.\n\nMy original thoughts were that it seems unlikely we'd ever give an\nassert build a workload that would ever have 2^32 dlist_nodes in\ndclist. Having said that, perhaps it would do no harm to add some\noverflow checks to 'count'. I've gone and added some\nAssert(head->count > 0) after we do count++.\n\n> + * As dlist_delete but performs checks in ILIST_DEBUG to ensure that 'node'\n> + * belongs to 'head'.\n> I think 'Same as dlist_delete' instead of just 'As dlist_delete'\n\nI don't really see what's wrong with this. We use \"As above\" when we\nmean \"Same as above\" in many locations. 
Anyway, I don't feel strongly\nabout not adding the word, so I've adjusted the wording in that\ncomment which includes adding the word \"Same\" at the start.\n\n> 5.\n> + * Caution: 'node' must be a member of 'head'.\n> + * Caller must ensure that 'before' is a member of 'head'.\n> Can we have the same comments around something like below?\n\nI've adjusted dclist_insert_after() and dclist_insert_before(). Each\ndclist function that uses dlist_member_check() now has the same text.\n\n> 8. Wondering if we need dlist_delete_from() at all. Can't we just add\n> dlist_member_check() dclist_delete_from() and call dlist_delete()\n> directly?\n\nCertainly, but I made it that way on purpose. I wanted dclist to have\na superset of the functions that dlist has. I just see no reason why\ndlist shouldn't have dlist_delete_from() when dclist has it.\n\nI've attached the v3 version of the patch which includes some\nadditional polishing work.\n\nDavid",
"msg_date": "Mon, 31 Oct 2022 15:56:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 8:26 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I looked at dlist_check() and I didn't quite manage to figure out why\n> the cast is needed. As far as I can see, there are no calls where we\n> only pass dlist_head solely for the dlist_check(). For\n> dlist_member_check(), dlist_delete_from() does not use the 'head'\n> parameter for anything apart from dlist_member_check(), so I believe\n> the cast is required for 'head'. I think I'd rather only add the cast\n> for 'node' unless we really require it. Cargo-culting it in there just\n> because that's what the other macros do does not seem like a good idea\n> to me.\n\nHm, you're right, dlist_member_check() needs it. Also, slist_check()\nneeds it for slist_has_next(). dlist_check() doesn't need it, however,\nkeeping it intact doesn't harm, I guess.\n\n> My original thoughts were that it seems unlikely we'd ever give an\n> assert build a workload that would ever have 2^32 dlist_nodes in\n> dclist. Having said that, perhaps it would do no harm to add some\n> overflow checks to 'count'. I've gone and added some\n> Assert(head->count > 0) after we do count++.\n\nSo, when an overflow occurs, the head->count wraps around after\nPG_UINT32_MAX, meaning, becomes 0 and we will catch it in an assert\nbuild. This looks reasonable to me. However, the responsibility lies\nwith the developers to deal with such overflows.\n\n> > 8. Wondering if we need dlist_delete_from() at all. Can't we just add\n> > dlist_member_check() dclist_delete_from() and call dlist_delete()\n> > directly?\n>\n> Certainly, but I made it that way on purpose. I wanted dclist to have\n> a superset of the functions that dlist has. I just see no reason why\n> dlist shouldn't have dlist_delete_from() when dclist has it.\n\nOkay.\n\n> I've attached the v3 version of the patch which includes some\n> additional polishing work.\n\nThanks. The v3 patch looks good to me.\n\nBTW, do we need sclist_* as well? 
AFAICS, no such use-case exists\nneeding slist and element count, maybe we don't need.\n\nI'm wondering if adding count to dlist_head and maintaining it as part\nof the existing dlist_* data structure and functions is any better\nthan introducing dclist_*? In that case, we need only one function,\ndlist_count, no? Or do we choose to go dclist_* because we want to\navoid the extra cost of maintaining count within dlist_*? If yes, is\nmaintaining count in dlist_* really costly?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:35:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Mon, 31 Oct 2022 at 19:05, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> So, when an overflow occurs, the head->count wraps around after\n> PG_UINT32_MAX, meaning, becomes 0 and we will catch it in an assert\n> build. This looks reasonable to me. However, the responsibility lies\n> with the developers to deal with such overflows.\n\nI'm not sure what the alternatives are here. One of the advantages of\ndlist over List is that there are no memory allocations to add a node\nwhich is already allocated onto a list. lappend() might need to\nenlarge the list, which means you can't do that in a critical section.\nIt's currently OK to add an item to a dlist in a critical section,\nhowever, if we add an elog(ERROR) then it won't be. The best I think\nwe can do is to just let the calling code ensure that it only uses\ndlist when it's certain that there can't be more than 2^32 items to\nstore at once.\n\nAdditionally, everywhere I've replaced dlist with dclist in the patch\nused either an int or uint32 for the counter. There was no code which\nchecked if the existing counter had wrapped.\n\n> Thanks. The v3 patch looks good to me.\n\nGreat. Thanks for having a look.\n\n> BTW, do we need sclist_* as well? AFAICS, no such use-case exists\n> needing slist and element count, maybe we don't need.\n\nI don't see anywhere that requires it.\n\n> I'm wondering if adding count to dlist_head and maintaining it as part\n> of the existing dlist_* data structure and functions is any better\n> than introducing dclist_*? In that case, we need only one function,\n> dlist_count, no? Or do we choose to go dclist_* because we want to\n> avoid the extra cost of maintaining count within dlist_*? If yes, is\n> maintaining count in dlist_* really costly?\n\nI have a few reasons for not wanting to do that:\n\n1) I think dlist operations are very fast at the moment. The fact\nthat the functions are static inline tells me the function call\noverhead matters. 
Therefore, it's likely maintaining a count also\nmatters.\n2) Code bloat. The functions are static inline. That means all\ncompiled code that adds or removes an item from a dlist would end up\nlarger. That results in more instruction cache misses.\n3) I've no reason to believe that all call sites that do\ndlist_delete() have the ability to know which list the node is on.\nJust look at ReorderBufferAssignChild(). I decided to not convert the\nsubtxns dlist into a dclist as the subtransaction sometimes seems to\ngo onto the top transaction list before it's moved to the sub-txn's\nlist.\n4) There's very little or no scope for bugs in dclist relating to the\ndlist implementation as all that stuff is done by just calling the\ndlist_* functions. The only scope is really that it could call the\nwrong dlist_* function. It does not seem terribly hard to ensure we\ndon't write any bugs like that.\n\nDavid\n\n\n",
"msg_date": "Mon, 31 Oct 2022 20:14:16 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 12:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 31 Oct 2022 at 19:05, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > So, when an overflow occurs, the head->count wraps around after\n> > PG_UINT32_MAX, meaning, becomes 0 and we will catch it in an assert\n> > build. This looks reasonable to me. However, the responsibility lies\n> > with the developers to deal with such overflows.\n>\n> I'm not sure what the alternatives are here. One of the advantages of\n> dlist over List is that there are no memory allocations to add a node\n> which is already allocated onto a list. lappend() might need to\n> enlarge the list, which means you can't do that in a critical section.\n\nUsing uint64 is one option to allow many elements, however, I'm also\nfine with removing the overflow assertions Assert(head->count > 0);\n/* count overflow check */ altogether and let the callers take the\nresponsibility. dlist_* and dclist_* callers already have another\nresponsibility - Caution: 'foo' must be a member of 'head'.\n\n> It's currently OK to add an item to a dlist in a critical section,\n> however, if we add an elog(ERROR) then it won't be.\n\ndlist_check() and dlist_member_check() have an elog(ERROR) and the\nabove statement isn't true in case of ILIST_DEBUG-defined builds.\n\n> The best I think\n> we can do is to just let the calling code ensure that it only uses\n> dlist when it's certain that there can't be more than 2^32 items to\n> store at once.\n\nRight.\n\n+ * able to store a maximum of PG_UINT32_MAX elements. 
It is up to the caller\n+ * to ensure no more than this many items are added to a dclist.\n\nThe above comment seems fine to me, if we really want to enforce any\noverflow checks on non-debug, non-assert builds, it might add some\ncosts to dclist_* functions.\n\n> > I'm wondering if adding count to dlist_head and maintaining it as part\n> > of the existing dlist_* data structure and functions is any better\n> > than introducing dclist_*? In that case, we need only one function,\n> > dlist_count, no? Or do we choose to go dclist_* because we want to\n> > avoid the extra cost of maintaining count within dlist_*? If yes, is\n> > maintaining count in dlist_* really costly?\n>\n> I have a few reasons for not wanting to do that:\n>\n> 1) I think dlist operations are very fast at the moment. The fact\n> that the functions are static inline tells me the function call\n> overhead matters. Therefore, it's likely maintaining a count also\n> matters.\n> 2) Code bloat. The functions are static inline. That means all\n> compiled code that adds or removes an item from a dlist would end up\n> larger. That results in more instruction cache misses.\n\nThis seems a fair point to me.\n\n> 3) I've no reason to believe that all call sites that do\n> dlist_delete() have the ability to know which list the node is on.\n> Just look at ReorderBufferAssignChild(). I decided to not convert the\n> subtxns dlist into a dclist as the subtransaction sometimes seems to\n> go onto the top transaction list before it's moved to the sub-txn's\n> list.\n> 4) There's very little or no scope for bugs in dclist relating to the\n> dlist implementation as all that stuff is done by just calling the\n> dlist_* functions. The only scope is really that it could call the\n> wrong dlist_* function. It does not seem terribly hard to ensure we\n> don't write any bugs like that.\n\nRight.\n\n> > Thanks. The v3 patch looks good to me.\n>\n> Great. 
Thanks for having a look.\n\nI will take another look at v3 tomorrow and probably mark it RfC.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:52:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "Hi hackers,\n\n> I will take another look at v3 tomorrow and probably mark it RfC.\n\nI very much like the patch. While on it:\n\n```\n+static inline bool\n+dclist_is_empty(dclist_head *head)\n+{\n+ Assert(dlist_is_empty(&head->dlist) == (head->count == 0));\n+ return (head->count == 0);\n+}\n```\n\nShould we consider const'ifying the arguments of the dlist_*/dclist_*\nfunctions that don't change the arguments?\n\nAdditionally it doesn't seem that we have any unit tests for dlist /\ndclist. Should we consider adding unit tests for them to\nsrc/test/regress?\n\nTo clarify, IMO both questions are out of scope of this specific patch\nand should be submitted separately.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:28:22 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 6:58 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> > I will take another look at v3 tomorrow and probably mark it RfC.\n>\n> I very much like the patch. While on it:\n>\n> ```\n> +static inline bool\n> +dclist_is_empty(dclist_head *head)\n> +{\n> + Assert(dlist_is_empty(&head->dlist) == (head->count == 0));\n> + return (head->count == 0);\n> +}\n> ```\n>\n> Should we consider const'ifying the arguments of the dlist_*/dclist_*\n> functions that don't change the arguments?\n\n+1, but as a separate discussion/thread/patch IMO.\n\n> Additionally it doesn't seem that we have any unit tests for dlist /\n> dclist. Should we consider adding unit tests for them to\n> src/test/regress?\n\nMost of the dlist_* functions are being covered I guess. AFAICS,\ndclist_* functions that aren't covered are dclist_insert_after(),\ndclist_insert_before(), dclist_pop_head_node(), dclist_move_head(),\ndclist_move_tail(), dclist_has_next(), dclist_has_prev(),\ndclist_next_node(), dclist_prev_node(), dclist_head_element_off(),\ndclist_head_node(), dclist_tail_element_off(), dclist_head_element().\n\nIMO, adding an extension under src/test/modules to cover missing or\nall dlist_* and dclist_* functions makes sense. It improves the code\ncoverage. FWIW, test_lfind is one such recent test extension.\n\n> To clarify, IMO both questions are out of scope of this specific patch\n> and should be submitted separately.\n\nYou're right, both of them must be discussed separately.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 20:28:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 8:28 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 6:58 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > > I will take another look at v3 tomorrow and probably mark it RfC.\n> >\n> > I very much like the patch. While on it:\n\nI took another look at v3 patch today and it looked good to me, hence\nmarked it RfC - https://commitfest.postgresql.org/40/3967/\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 1 Nov 2022 13:25:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 20:55, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I took another look at v3 patch today and it looked good to me, hence\n> marked it RfC - https://commitfest.postgresql.org/40/3967/\n\nMany thanks for reviewing this.\n\nIf nobody has any objections, I plan to push this tomorrow morning New\nZealand time (around 10 hours from now).\n\nDavid\n\n\n",
"msg_date": "Tue, 1 Nov 2022 23:19:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 23:19, David Rowley <dgrowleyml@gmail.com> wrote:\n> If nobody has any objections, I plan to push this tomorrow morning New\n> Zealand time (around 10 hours from now).\n\nPushed. Thank you both for reviewing this.\n\nDavid\n\n\n",
"msg_date": "Wed, 2 Nov 2022 14:08:38 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "Hi David,\n\n> Pushed. Thank you both for reviewing this.\n\nThanks for applying the patch.\n\n>> Should we consider const'ifying the arguments of the dlist_*/dclist_*\n>> functions that don't change the arguments?\n>> [...]\n> You're right, both of them must be discussed separately.\n\nI would like to piggyback on this thread to propose the const'ifying\npatch, if that's OK. Here it is.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 2 Nov 2022 11:53:26 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 2:23 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi David,\n>\n> > Pushed. Thank you both for reviewing this.\n>\n> Thanks for applying the patch.\n>\n> >> Should we consider const'ifying the arguments of the dlist_*/dclist_*\n> >> functions that don't change the arguments?\n> >> [...]\n> > You're right, both of them must be discussed separately.\n>\n> I would like to piggyback on this thread to propose the const'ifying\n> patch, if that's OK. Here it is.\n\nThanks for the patch. IMO, this can be discussed in a separate thread\nto get more thoughts from the hackers.\n\nBTW, there's proclist_* data structure which might need the similar\nconst'ifying.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 3 Nov 2022 13:11:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "Hi Bharath,\n\n> Thanks for the patch. IMO, this can be discussed in a separate thread\n> to get more thoughts from the hackers.\n\nOK, I started a new thread [1], thanks.\n\nRegarding the improvement of the code coverage I realized that this\nmay be a good patch for a newcomer. I know several people who may be\ninterested in starting to contribute to PostgreSQL. Maybe I'll be able\nto find a volunteer.\n\n[1]: https://www.postgresql.org/message-id/flat/CAJ7c6TM2=08mNKD9aJg8vEY9hd+G4L7+Nvh30UiNT3kShgRgNg@mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 7 Nov 2022 12:06:58 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 2:37 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Regarding the improvement of the code coverage I realized that this\n> may be a good patch for a newcomer. I know several people who may be\n> interested in starting to contribute to PostgreSQL. Maybe I'll be able\n> to find a volunteer.\n\nHm. Or adding a ToDo item here https://wiki.postgresql.org/wiki/Todo\nmight also help?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 7 Nov 2022 15:30:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
},
{
"msg_contents": "Hi Bharath,\n\n> > Regarding the improvement of the code coverage I realized that this\n> > may be a good patch for a newcomer. I know several people who may be\n> > interested in starting to contribute to PostgreSQL. Maybe I'll be able\n> > to find a volunteer.\n>\n> Hm. Or adding a ToDo item here https://wiki.postgresql.org/wiki/Todo\n> might also help?\n\nGood point. Will it be better to use the \"Miscellaneous Other\" section\nfor this or create a new \"Code coverage\" section?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 7 Nov 2022 13:07:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding doubly linked list type which stores the number of items\n in the list"
}
] |
[
{
"msg_contents": "It is a common user annoyance to have a script fail because someone\nadded a VACUUM, especially when using --single-transaction option.\nFix, so that this works without issue:\n\nBEGIN;\n....\nVACUUM (ANALYZE) vactst;\n....\nCOMMIT;\n\nAllows both ANALYZE and vacuum of toast tables, but not VACUUM FULL.\n\nWhen in a xact block, we do not set PROC_IN_VACUUM,\nnor update datfrozenxid.\n\nTests, docs.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 27 Oct 2022 10:31:31 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, 27 Oct 2022 at 10:31, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> Tests, docs.\n\nThe patch tester says that a pg_upgrade test is failing on Windows,\nbut works for me.\n\nt/002_pg_upgrade.pl .. ok\n\nAnybody shed any light on that, much appreciated.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 27 Oct 2022 17:18:45 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 9:49 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 27 Oct 2022 at 10:31, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > Tests, docs.\n>\n> The patch tester says that a pg_upgrade test is failing on Windows,\n> but works for me.\n>\n> t/002_pg_upgrade.pl .. ok\n>\n> Anybody shed any light on that, much appreciated.\n\nPlease see a recent thread on pg_upgrade failure -\nhttps://www.postgresql.org/message-id/Y04mN0ZLNzJywrad%40paquier.xyz.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 27 Oct 2022 21:51:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 10:31:31AM +0100, Simon Riggs wrote:\n> Allows both ANALYZE and vacuum of toast tables, but not VACUUM FULL.\n\nMaybe I misunderstood what you meant: you said \"not VACUUM FULL\", but\nwith your patch, that works:\n\npostgres=# begin; VACUUM FULL pg_class; commit;\nBEGIN\nVACUUM\nCOMMIT\n\nActually, I've thought before that it was bit weird that CLUSTER can be\nrun within a transaction, but VACUUM FULL cannot (even though it does a\nCLUSTER behind the scenes). VACUUM FULL can process multiple relations,\nwhereas CLUSTER can't, but it seems nice to allow vacuum full for the\ncase of a single relation.\n\nI haven't checked the rest of the patch, but +1 for allowing VACUUM FULL\nwithin a user txn.\n\nMaybe the error message needs to be qualified \"...when multiple\nrelations are specified\".\n\nERROR: VACUUM cannot run inside a transaction block\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 27 Oct 2022 15:07:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, 27 Oct 2022 at 21:07, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Oct 27, 2022 at 10:31:31AM +0100, Simon Riggs wrote:\n> > Allows both ANALYZE and vacuum of toast tables, but not VACUUM FULL.\n>\n> Maybe I misunderstood what you meant: you said \"not VACUUM FULL\", but\n> with your patch, that works:\n>\n> postgres=# begin; VACUUM FULL pg_class; commit;\n> BEGIN\n> VACUUM\n> COMMIT\n>\n> Actually, I've thought before that it was bit weird that CLUSTER can be\n> run within a transaction, but VACUUM FULL cannot (even though it does a\n> CLUSTER behind the scenes). VACUUM FULL can process multiple relations,\n> whereas CLUSTER can't, but it seems nice to allow vacuum full for the\n> case of a single relation.\n>\n> I haven't checked the rest of the patch, but +1 for allowing VACUUM FULL\n> within a user txn.\n\nMy intention was to prevent that. I am certainly quite uneasy about\nchanging anything related to CLUSTER/VF, since they are old, complex\nand bug prone.\n\nSo for now, I will block VF, as was my original intent.\n\nI will be guided by what others think... so you may yet get your wish.\n\n\n> Maybe the error message needs to be qualified \"...when multiple\n> relations are specified\".\n>\n> ERROR: VACUUM cannot run inside a transaction block\n\nHmm, that is standard wording based on the statement type, but I can\nset a CONTEXT message also. Will update accordingly.\n\nThanks for your input.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 1 Nov 2022 23:56:17 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 23:56, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> > I haven't checked the rest of the patch, but +1 for allowing VACUUM FULL\n> > within a user txn.\n>\n> My intention was to prevent that. I am certainly quite uneasy about\n> changing anything related to CLUSTER/VF, since they are old, complex\n> and bug prone.\n>\n> So for now, I will block VF, as was my original intent.\n>\n> I will be guided by what others think... so you may yet get your wish.\n>\n>\n> > Maybe the error message needs to be qualified \"...when multiple\n> > relations are specified\".\n> >\n> > ERROR: VACUUM cannot run inside a transaction block\n>\n> Hmm, that is standard wording based on the statement type, but I can\n> set a CONTEXT message also. Will update accordingly.\n>\n> Thanks for your input.\n\nNew version attached, as described.\n\nOther review comments and alternate opinions welcome.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 3 Nov 2022 10:23:27 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Hi Simon,\n\nOn Thu, Nov 3, 2022 at 3:53 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Tue, 1 Nov 2022 at 23:56, Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n>\n> > > I haven't checked the rest of the patch, but +1 for allowing VACUUM\n> FULL\n> > > within a user txn.\n> >\n> > My intention was to prevent that. I am certainly quite uneasy about\n> > changing anything related to CLUSTER/VF, since they are old, complex\n> > and bug prone.\n> >\n> > So for now, I will block VF, as was my original intent.\n> >\n> > I will be guided by what others think... so you may yet get your wish.\n> >\n> >\n> > > Maybe the error message needs to be qualified \"...when multiple\n> > > relations are specified\".\n> > >\n> > > ERROR: VACUUM cannot run inside a transaction block\n> >\n> > Hmm, that is standard wording based on the statement type, but I can\n> > set a CONTEXT message also. Will update accordingly.\n> >\n> > Thanks for your input.\n>\n> New version attached, as described.\n>\n> Other review comments and alternate opinions welcome.\n>\n>\nI applied and did some basic testing on the patch, it works as described.\n\nI would like to bring up a few points that I came across while looking into\nthe vacuum code.\n\n1. As a result of this change to allow VACUUM inside a user transaction, I\nthink there is some chance of causing\na block/delay of concurrent VACUUMs if a VACUUM is being run under a long\nrunning transaction.\n2. Also, if a user runs VACUUM in a transaction, performance optimizations\nlike PROC_IN_VACUUM won't work.\n3. 
Also, if VACUUM happens towards the end of a long running transaction,\nthe snapshot will be old\nand xmin horizon for vacuum would be somewhat old as compared to current\nlazy vacuum which\nacquires a new snapshot just before scanning the table.\n\nSo, while I understand the need of the feature, I am wondering if there\nshould be some mention\nof above caveats in documentation with the recommendation that VACUUM\nshould be run outside\na transaction, in general.\n\nThank you,\nRahila Syed",
"msg_date": "Fri, 4 Nov 2022 10:15:44 +0530",
"msg_from": "Rahila Syed <rahilasyed90@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Hi Simon,\n\nOn Fri, Nov 4, 2022 at 10:15 AM Rahila Syed <rahilasyed90@gmail.com> wrote:\n\n> Hi Simon,\n>\n> On Thu, Nov 3, 2022 at 3:53 PM Simon Riggs <simon.riggs@enterprisedb.com>\n> wrote:\n>\n>> On Tue, 1 Nov 2022 at 23:56, Simon Riggs <simon.riggs@enterprisedb.com>\n>> wrote:\n>>\n>> > > I haven't checked the rest of the patch, but +1 for allowing VACUUM\n>> FULL\n>> > > within a user txn.\n>> >\n>> > My intention was to prevent that. I am certainly quite uneasy about\n>> > changing anything related to CLUSTER/VF, since they are old, complex\n>> > and bug prone.\n>> >\n>> > So for now, I will block VF, as was my original intent.\n>> >\n>> > I will be guided by what others think... so you may yet get your wish.\n>> >\n>> >\n>> > > Maybe the error message needs to be qualified \"...when multiple\n>> > > relations are specified\".\n>> > >\n>> > > ERROR: VACUUM cannot run inside a transaction block\n>> >\n>> > Hmm, that is standard wording based on the statement type, but I can\n>> > set a CONTEXT message also. Will update accordingly.\n>> >\n>> > Thanks for your input.\n>>\n>> New version attached, as described.\n>>\n>> Other review comments and alternate opinions welcome.\n>>\n>>\n> I applied and did some basic testing on the patch, it works as described.\n>\n> I would like to bring up a few points that I came across while looking\n> into the vacuum code.\n>\n> 1. As a result of this change to allow VACUUM inside a user transaction,\n> I think there is some chance of causing\n> a block/delay of concurrent VACUUMs if a VACUUM is being run under a long\n> running transaction.\n> 2. Also, if a user runs VACUUM in a transaction, performance optimizations\n> like PROC_IN_VACUUM won't work.\n> 3. 
Also, if VACUUM happens towards the end of a long running transaction,\n> the snapshot will be old\n> and xmin horizon for vacuum would be somewhat old as compared to current\n> lazy vacuum which\n> acquires a new snapshot just before scanning the table.\n>\n> So, while I understand the need of the feature, I am wondering if there\n> should be some mention\n> of above caveats in documentation with the recommendation that VACUUM\n> should be run outside\n> a transaction, in general.\n>\n>\nSorry, I just noticed that you have already mentioned some of these in the\ndocumentation as follows, so it seems\nit is already taken care of.\n\n+ <command>VACUUM</command> cannot be executed inside a transaction\nblock,\n+ unless a single table is specified and <literal>FULL</literal> is not\n+ specified. When executing inside a transaction block the vacuum scan\ncan\n+ hold back the xmin horizon and does not update the database\ndatfrozenxid,\n+ as a result this usage is not useful for database maintenance, but is\nprovided\n+ to allow vacuuming in special circumstances, such as temporary or\nprivate\n+ work tables.\n\nThank you,\nRahila Syed",
"msg_date": "Fri, 4 Nov 2022 13:06:54 +0530",
"msg_from": "Rahila Syed <rahilasyed90@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Hi Rahila,\n\nThanks for your review.\n\nOn Fri, 4 Nov 2022 at 07:37, Rahila Syed <rahilasyed90@gmail.com> wrote:\n\n>> I would like to bring up a few points that I came across while looking into the vacuum code.\n>>\n>> 1. As a result of this change to allow VACUUM inside a user transaction, I think there is some chance of causing\n>> a block/delay of concurrent VACUUMs if a VACUUM is being run under a long running transaction.\n>> 2. Also, if a user runs VACUUM in a transaction, performance optimizations like PROC_IN_VACUUM won't work.\n>> 3. Also, if VACUUM happens towards the end of a long running transaction, the snapshot will be old\n>> and xmin horizon for vacuum would be somewhat old as compared to current lazy vacuum which\n>> acquires a new snapshot just before scanning the table.\n>>\n>> So, while I understand the need of the feature, I am wondering if there should be some mention\n>> of above caveats in documentation with the recommendation that VACUUM should be run outside\n>> a transaction, in general.\n>>\n>\n> Sorry, I just noticed that you have already mentioned some of these in the documentation as follows, so it seems\n> it is already taken care of.\n>\n> + <command>VACUUM</command> cannot be executed inside a transaction block,\n> + unless a single table is specified and <literal>FULL</literal> is not\n> + specified. When executing inside a transaction block the vacuum scan can\n> + hold back the xmin horizon and does not update the database datfrozenxid,\n> + as a result this usage is not useful for database maintenance, but is provided\n> + to allow vacuuming in special circumstances, such as temporary or private\n> + work tables.\n\nYes, I wondered whether we should have a NOTICE or WARNING to remind\npeople of those points?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:09:34 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Hi,\n\nOn Fri, Nov 4, 2022 at 2:39 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> Hi Rahila,\n>\n> Thanks for your review.\n>\n> On Fri, 4 Nov 2022 at 07:37, Rahila Syed <rahilasyed90@gmail.com> wrote:\n>\n> >> I would like to bring up a few points that I came across while looking\n> into the vacuum code.\n> >>\n> >> 1. As a result of this change to allow VACUUM inside a user\n> transaction, I think there is some chance of causing\n> >> a block/delay of concurrent VACUUMs if a VACUUM is being run under a\n> long running transaction.\n> >> 2. Also, if a user runs VACUUM in a transaction, performance\n> optimizations like PROC_IN_VACUUM won't work.\n> >> 3. Also, if VACUUM happens towards the end of a long running\n> transaction, the snapshot will be old\n> >> and xmin horizon for vacuum would be somewhat old as compared to\n> current lazy vacuum which\n> >> acquires a new snapshot just before scanning the table.\n> >>\n> >> So, while I understand the need of the feature, I am wondering if there\n> should be some mention\n> >> of above caveats in documentation with the recommendation that VACUUM\n> should be run outside\n> >> a transaction, in general.\n> >>\n> >\n> > Sorry, I just noticed that you have already mentioned some of these in\n> the documentation as follows, so it seems\n> > it is already taken care of.\n> >\n> > + <command>VACUUM</command> cannot be executed inside a transaction\n> block,\n> > + unless a single table is specified and <literal>FULL</literal> is\n> not\n> > + specified. 
When executing inside a transaction block the vacuum\n> scan can\n> > + hold back the xmin horizon and does not update the database\n> datfrozenxid,\n> > + as a result this usage is not useful for database maintenance, but\n> is provided\n> > + to allow vacuuming in special circumstances, such as temporary or\n> private\n> > + work tables.\n>\n> Yes, I wondered whether we should have a NOTICE or WARNING to remind\n> people of those points?\n>\n\n +1 . My vote for NOTICE over WARNING because I think\nit is useful information for the user rather than any potential problem.\n\nThank you,\nRahila Syed",
"msg_date": "Fri, 4 Nov 2022 17:18:08 +0530",
"msg_from": "Rahila Syed <rahilasyed90@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 2:31 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Fix, so that this works without issue:\n>\n> BEGIN;\n> ....\n> VACUUM (ANALYZE) vactst;\n> ....\n> COMMIT;\n>\n> Allows both ANALYZE and vacuum of toast tables, but not VACUUM FULL.\n>\n> When in a xact block, we do not set PROC_IN_VACUUM,\n> nor update datfrozenxid.\n\nIt doesn't seem like a good idea to add various new special cases to\nVACUUM just to make scripts like this work. I'm pretty sure that there\nare several deep, subtle reasons why VACUUM cannot be assumed safe to\nrun in a user transaction.\n\nFor example, the whole way that index page deletion is decoupled from\nrecycling in access methods like nbtree (see \"Placing deleted pages in\nthe FSM\" from the nbtree README) rests upon delicate assumptions about\nwhether or not there could be an \"in-flight\" B-tree descent that is\nat risk of landing on a deleted page as it is concurrently recycled.\nIn general the deleted page has to remain in place as a tombstone,\nuntil that is definitely not possible anymore. This relies on the backend\nthat runs VACUUM having no references to the page pending deletion.\n(Commit 9dd963ae25 added an optimization that heavily leaned on the\nidea that the state within the backend running VACUUM couldn't\npossibly have a live page reference that is at risk of being broken by\nthe optimization, though I'm pretty sure that you'd have problems even\nwithout that commit/optimization in place.)\n\nMy guess is that there are more things like that. Possibly even things\nthat were never directly considered. VACUUM evolved in a world where\nwe absolutely took not running in a transaction for granted. Changing\nthat now is a pretty big deal. Maybe it would all be worth it if the end\nresult was a super compelling feature. 
But I for one don't think that\nit will be.\n\nIf we absolutely have to do this, then the least worst approach might\nbe to make VACUUM into a no-op rather than throwing an ERROR -- demote\nthe ERROR into a WARNING. You could argue that we're just arbitrarily\ndeciding to not do a VACUUM just to be able to avoid throwing an error\nif we do that. But isn't that already true with the patch that we\nhave? Is it really a \"true VACUUM\" if the operation can never advance\ndatfrozenxid? At least selectively demoting the ERROR to a WARNING is\n\"transparent\" about it.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 6 Nov 2022 10:50:24 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> My guess is that there are more things like that. Possibly even things\n> that were never directly considered. VACUUM evolved in a world where\n> we absolutely took not running in a transaction for granted. Changing\n> that now is a pretty big deal. Maybe it would all be worth it if the end\n> result was a super compelling feature. But I for one don't think that\n> it will be.\n\nYeah. To be blunt, this proposal scares the sh*t out of me. I agree\nthat there are tons of subtle dependencies on our current assumptions\nabout how VACUUM works, and I strongly suspect that we won't find all of\nthem until after users have lost data. I cannot believe that running\nVACUUM inside transactions is valuable enough to take that risk ...\nespecially if it's a modified limited kind of VACUUM that doesn't\neliminate the need for periodic real VACUUMs.\n\nIn general, I do not believe in encouraging users to run VACUUM\nmanually in the first place. We would be far better served by\nspending our effort to improve autovacuum's shortcomings. I'd\nlike to see some sort of direct attack on its inability to deal\nwith temp tables, for instance. (Force the owning backend to\ndo it? Temporarily change the access rules so that the data\nmoves to shared buffers? Dunno, but we sure haven't tried hard.)\nHowever many bugs such a thing might have initially, at least\nthey'd only put temporary data at hazard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Nov 2022 14:14:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Sun, Nov 6, 2022 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In general, I do not believe in encouraging users to run VACUUM\n> manually in the first place. We would be far better served by\n> spending our effort to improve autovacuum's shortcomings.\n\nI couldn't agree more. A lot of problems seem related to the idea that\nVACUUM is just a command that the DBA periodically runs to get a\npredictable fixed result, a little like CREATE INDEX. That conceptual\nmodel isn't exactly wrong; it just makes it much harder to apply any\nkind of context about the needs of the table over time. There is a\nnatural cycle to how VACUUM (really autovacuum) is run, and the\ndetails matter.\n\nThere is a significant amount of relevant context that we can't really\nuse right now. That wouldn't be true if VACUUM only ran within an\nautovacuum worker (by definition). The VACUUM command itself would\nstill be available, and support the same user interface, more or less.\nUnder the hood the VACUUM command would work by enqueueing a VACUUM\njob, to be performed asynchronously by an autovacuum worker. Perhaps\nthe initial enqueue operation could be transactional, fixing Simon's complaint.\n\n\"No more VACUUMs outside of autovacuum\" would enable more advanced\nautovacuum.c scheduling, allowing us to apply a lot more context about\nthe costs and benefits, without having to treat manual VACUUM as an\nindependent thing. We could coalesce together redundant VACUUM jobs,\nsuspend and resume VACUUM operations, and have more strategies to deal\nwith problems as they emerge.\n\n> I'd like to see some sort of direct attack on its inability to deal\n> with temp tables, for instance. (Force the owning backend to\n> do it? Temporarily change the access rules so that the data\n> moves to shared buffers? Dunno, but we sure haven't tried hard.)\n\nThis is a good example of the kind of thing I have in mind. 
Perhaps it\ncould work by killing the backend that owns the temp relation when\nthings truly get out of hand? I think that that would be a perfectly\nreasonable trade-off.\n\nAnother related idea: better behavior in the event of a manually\nissued VACUUM (now just an enqueued autovacuum) that cannot do useful\nwork due to the presence of a long running snapshot. The VACUUM\ndoesn't have to dutifully report \"success\" when there is no practical\nsense in which it was successful. There could be a back and forth\nconversation between autovacuum.c and vacuumlazy.c that makes sure\nthat something useful happens sooner or later. The passage of time\nreally matters here.\n\nAs a bonus, we might be able to get rid of the autovacuum GUC\nvariants. Plus the current autovacuum logging would just be how we'd\nlog every VACUUM.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 6 Nov 2022 12:40:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Sun, 6 Nov 2022 at 18:50, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Oct 27, 2022 at 2:31 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > Fix, so that this works without issue:\n> >\n> > BEGIN;\n> > ....\n> > VACUUM (ANALYZE) vactst;\n> > ....\n> > COMMIT;\n> >\n> > Allows both ANALYZE and vacuum of toast tables, but not VACUUM FULL.\n> >\n> > When in a xact block, we do not set PROC_IN_VACUUM,\n> > nor update datfrozenxid.\n>\n> It doesn't seem like a good idea to add various new special cases to\n> VACUUM just to make scripts like this work.\n\nUsability is a major concern that doesn't get a high enough priority.\n\n> I'm pretty sure that there\n> are several deep, subtle reasons why VACUUM cannot be assumed safe to\n> run in a user transaction.\n\nI expected there were, so it's good to discuss them. Thanks for the input.\n\n> If we absolutely have to do this, then the least worst approach might\n> be to make VACUUM into a no-op rather than throwing an ERROR -- demote\n> the ERROR into a WARNING. You could argue that we're just arbitrarily\n> deciding to not do a VACUUM just to be able to avoid throwing an error\n> if we do that. But isn't that already true with the patch that we\n> have? Is it really a \"true VACUUM\" if the operation can never advance\n> datfrozenxid? At least selectively demoting the ERROR to a WARNING is\n> \"transparent\" about it.\n\nI'll answer that part in my reply to Tom, since there are good ideas in both.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 7 Nov 2022 07:58:13 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Sun, 6 Nov 2022 at 20:40, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Nov 6, 2022 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > In general, I do not believe in encouraging users to run VACUUM\n> > manually in the first place. We would be far better served by\n> > spending our effort to improve autovacuum's shortcomings.\n>\n> I couldn't agree more. A lot of problems seem related to the idea that\n> VACUUM is just a command that the DBA periodically runs to get a\n> predictable fixed result, a little like CREATE INDEX. That conceptual\n> model isn't exactly wrong; it just makes it much harder to apply any\n> kind of context about the needs of the table over time. There is a\n> natural cycle to how VACUUM (really autovacuum) is run, and the\n> details matter.\n>\n> There is a significant amount of relevant context that we can't really\n> use right now. That wouldn't be true if VACUUM only ran within an\n> autovacuum worker (by definition). The VACUUM command itself would\n> still be available, and support the same user interface, more or less.\n> Under the hood the VACUUM command would work by enqueueing a VACUUM\n> job, to be performed asynchronously by an autovacuum worker. Perhaps\n> the initial enqueue operation could be transactional, fixing Simon's complaint.\n\nAh, I see you got to this idea first!\n\nYes, what we need is for the \"VACUUM command\" to not fail in a script.\nNot sure anyone cares where the work takes place.\n\nEnqueuing a request for autovacuum to do that work, then blocking\nuntil it is complete would do the job.\n\n> \"No more VACUUMs outside of autovacuum\" would enable more advanced\n> autovacuum.c scheduling, allowing us to apply a lot more context about\n> the costs and benefits, without having to treat manual VACUUM as an\n> independent thing. 
We could coalesce together redundant VACUUM jobs,\n> suspend and resume VACUUM operations, and have more strategies to deal\n> with problems as they emerge.\n\n+1, but clearly this would not make temp table VACUUMs work.\n\n> > I'd like to see some sort of direct attack on its inability to deal\n> > with temp tables, for instance. (Force the owning backend to\n> > do it? Temporarily change the access rules so that the data\n> > moves to shared buffers? Dunno, but we sure haven't tried hard.)\n\nThis was a $DIRECT attack on making temp tables work! ;-)\n\nTemp tables are actually easier, since we don't need any of the\nconcurrency features we get with lazy vacuum. So the answer is to\nalways run a VACUUM FULL on temp tables since this skips any issues\nwith indexes etc..\n\nWe would need to check a few things first.... maybe something like\nthis (mostly borrowed heavily from COPY)\n\n InvalidateCatalogSnapshot();\n if (!ThereAreNoPriorRegisteredSnapshots() || !ThereAreNoReadyPortals())\n ereport(WARNING,\n (errcode(ERRCODE_INVALID_TRANSACTION_STATE),\n errmsg(\"vacuum of temporary table ignored because\nof prior transaction activity\")));\n CheckTableNotInUse(rel, \"VACUUM\");\n\n> This is a good example of the kind of thing I have in mind. Perhaps it\n> could work by killing the backend that owns the temp relation when\n> things truly get out of hand? I think that that would be a perfectly\n> reasonable trade-off.\n\n+1\n\n> Another related idea: better behavior in the event of a manually\n> issued VACUUM (now just an enqueued autovacuum) that cannot do useful\n> work due to the presence of a long running snapshot. The VACUUM\n> doesn't have to dutifully report \"success\" when there is no practical\n> sense in which it was successful. There could be a back and forth\n> conversation between autovacuum.c and vacuumlazy.c that makes sure\n> that something useful happens sooner or later. 
The passage of time\n> really matters here.\n\nRegrettably, neither vacuum nor autovacuum waits for xmin to change;\nperhaps it should.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 7 Nov 2022 08:20:32 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 12:20 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> > Another related idea: better behavior in the event of a manually\n> > issued VACUUM (now just an enqueued autovacuum) that cannot do useful\n> > work due to the presence of a long running snapshot. The VACUUM\n> > doesn't have to dutifully report \"success\" when there is no practical\n> > sense in which it was successful. There could be a back and forth\n> > conversation between autovacuum.c and vacuumlazy.c that makes sure\n> > that something useful happens sooner or later. The passage of time\n> > really matters here.\n>\n> Regrettably, neither vacuum nor autovacuum waits for xmin to change;\n> perhaps it should.\n\nYes, it's very primitive right now. In fact I recently discovered that\njust using the reloption version (not the GUC version) of\nautovacuum_freeze_max_age in a totally straightforward way is all it\ntakes to utterly confuse autovacuum.c:\n\nhttps://postgr.es/m/CAH2-Wz=DJAokY_GhKJchgpa8k9t_H_OVOvfPEn97jGNr9W=deg@mail.gmail.com\n\nIt's easy to convince autovacuum.c to launch antiwraparound\nautovacuums that reliably have no chance of advancing relfrozenxid to\na degree that satisfies autovacuum.c. It will launch antiwraparound\nautovacuums again and again, never realizing that VACUUM doesn't\nreally care about what it expects (at least not with the reloption in\nuse). Clearly that's just broken. It also suggests a more general\ndesign problem, at least in my mind.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 7 Nov 2022 16:34:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Mon, 7 Nov 2022 at 08:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> Temp tables are actually easier, since we don't need any of the\n> concurrency features we get with lazy vacuum. So the answer is to\n> always run a VACUUM FULL on temp tables since this skips any issues\n> with indexes etc..\n\nSo I see 3 options for what to do next\n\n1. Force the FULL option for all tables, when executed in a\ntransaction block. This gets round the reasonable objections to\nrunning a concurrent vacuum in a shared xact block. As Justin points\nout, CLUSTER is already supported, which uses the same code.\n\n2. Force the FULL option for temp tables, when executed in a\ntransaction block. In a later patch, queue up an autovacuum run for\nregular tables.\n\n3. Return this patch with feedback. (But then what happens with temp tables?)\n\nThoughts?\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 8 Nov 2022 03:10:03 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Tue, 8 Nov 2022 at 03:10, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Mon, 7 Nov 2022 at 08:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > Temp tables are actually easier, since we don't need any of the\n> > concurrency features we get with lazy vacuum.\n\n> Thoughts?\n\nNew patch, which does this, when in a xact block\n\n1. For temp tables, only VACUUM FULL is allowed\n2. For persistent tables, an AV task is created to perform the vacuum,\nwhich eventually performs a vacuum\n\nThe patch works, but there are various aspects of the design that need\ninput. Thoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 14 Nov 2022 19:52:04 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Mon, 14 Nov 2022 at 19:52, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 8 Nov 2022 at 03:10, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Mon, 7 Nov 2022 at 08:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > > Temp tables are actually easier, since we don't need any of the\n> > > concurrency features we get with lazy vacuum.\n>\n> > Thoughts?\n>\n> New patch, which does this, when in a xact block\n>\n> 1. For temp tables, only VACUUM FULL is allowed\n> 2. For persistent tables, an AV task is created to perform the vacuum,\n> which eventually performs a vacuum\n>\n> The patch works, but there are various aspects of the design that need\n> input. Thoughts?\n\nNew version.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 15 Nov 2022 09:13:36 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "I think the idea of being able to request an autovacuum worker for a\nspecific table is actually very good. I think it's what most users\nactually want when they are running vacuum. In fact in previous jobs\npeople have built infrastructure that basically duplicates autovacuum\njust so they could do this.\n\nHowever I'm not a fan of commands that sometimes do one thing and\nsometimes magically do something very different. I don't like the idea\nthat the same vacuum command would sometimes run in-process and\nsometimes do this out of process request. And the rules for when it\ndoes which are fairly complex to explain -- it runs in process unless\nyou're in a transaction when it runs out of process unless it's a\ntemporary table ...\n\nI think this requesting autovacuum worker should be a distinct\ncommand. Or at least an explicit option to vacuum.\n\nAlso, I was confused reading the thread above about mention of VACUUM\nFULL. I don't understand why it's relevant at all. We certainly can't\nforce VACUUM FULL when it wasn't requested on potentially large\ntables.\n\n\n",
"msg_date": "Wed, 16 Nov 2022 17:14:07 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> I think this requesting autovacuum worker should be a distinct\n> command. Or at least an explicit option to vacuum.\n\n+1. That'd reduce confusion, and perhaps we could remove some\nof the restrictions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 16 Nov 2022 17:26:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 05:14:07PM -0500, Greg Stark wrote:\n> I think this requesting autovacuum worker should be a distinct\n> command. Or at least an explicit option to vacuum.\n\n+1. I was going to suggest VACUUM (NOWAIT) ..\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 17 Nov 2022 14:00:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 5:14 PM Greg Stark <stark@mit.edu> wrote:\n> However I'm not a fan of commands that sometimes do one thing and\n> sometimes magically do something very different. I don't like the idea\n> that the same vacuum command would sometimes run in-process and\n> sometimes do this out of process request. And the rules for when it\n> does which are fairly complex to explain -- it runs in process unless\n> you're in a transaction when it runs out of process unless it's a\n> temporary table ...\n\n100% agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Nov 2022 15:06:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 20:00, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 05:14:07PM -0500, Greg Stark wrote:\n> > I think this requesting autovacuum worker should be a distinct\n> > command. Or at least an explicit option to vacuum.\n>\n> +1. I was going to suggest VACUUM (NOWAIT) ..\n\nYes, I have no problem with an explicit command.\n\nAt the moment the patch runs VACUUM in the background in an autovacuum\nprocess, but the call is asynchronous, since we do not wait for the\ncommand to finish (or even start).\n\nSo the command names I was thinking of would be one of these:\n\nVACUUM (BACKGROUND) or VACUUM (AUTOVACUUM) - which might be clearer\nor\nVACUUM (ASYNC) - which is more descriptive of the behavior\n\nor we could go for both\nVACUUM (BACKGROUND, ASYNC) - since this allows us to have a\nBACKGROUND, SYNC version in the future\n\nThoughts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 18 Nov 2022 11:54:25 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Thu, 17 Nov 2022 at 20:06, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Nov 16, 2022 at 5:14 PM Greg Stark <stark@mit.edu> wrote:\n> > However I'm not a fan of commands that sometimes do one thing and\n> > sometimes magically do something very different. I don't like the idea\n> > that the same vacuum command would sometimes run in-process and\n> > sometimes do this out of process request. And the rules for when it\n> > does which are fairly complex to explain -- it runs in process unless\n> > you're in a transaction when it runs out of process unless it's a\n> > temporary table ...\n>\n> 100% agree.\n\nI agree as well.\n\nAt the moment, the problem (OT) is that VACUUM behaves inconsistently\n\nOutside a transaction - works perfectly\nIn a transaction - throws ERROR, which prevents a whole script from\nexecuting correctly\n\nWhat we are trying to do is avoid the ERROR. I don't want them to\nbehave like this, but that's the only option possible to avoid ERROR.\n\nSo if consistency is also a strong requirement, then maybe we should\nmake that new command the default, i.e. make VACUUM always just a\nrequest to vacuum in background. That way it will be consistent.\n\nCan we at least have a vacuum_runs_in_background = on | off, to allow\nusers to take advantage of this WITHOUT needing to rewrite all of\ntheir scripts?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 18 Nov 2022 12:04:02 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 7:04 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Outside a transaction - works perfectly\n> In a transaction - throws ERROR, which prevents a whole script from\n> executing correctly\n\nRight, but your proposal would move that inconsistency to a different\nplace. It wouldn't eliminate it. I don't think we can pretend that\nnobody will notice their operation being moved to the background. For\ninstance, there might not be an available background worker for a long\ntime, which could mean that some vacuums work right away and others\njust sit there for reasons that aren't obvious to the user.\n\n> So if consistency is also a strong requirement, then maybe we should\n> make that new command the default, i.e. make VACUUM always just a\n> request to vacuum in background. That way it will be consistent.\n\nSince one fairly common reason for running vacuum in the foreground is\nneeding to vacuum a table when all autovacuum workers are busy, or\nwhen they are vacuuming it with a cost limit and it needs to get done\nsooner, I think this would surprise a lot of users in a negative way.\n\n> Can we at least have a vacuum_runs_in_background = on | off, to allow\n> users to take advantage of this WITHOUT needing to rewrite all of\n> their scripts?\n\nI'm not entirely convinced that's a good idea, but happy to hear what\nothers think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Nov 2022 11:59:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Nov 18, 2022 at 7:04 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n>> So if consistency is also a strong requirement, then maybe we should\n>> make that new command the default, i.e. make VACUUM always just a\n>> request to vacuum in background. That way it will be consistent.\n\n> Since one fairly common reason for running vacuum in the foreground is\n> needing to vacuum a table when all autovacuum workers are busy, or\n> when they are vacuuming it with a cost limit and it needs to get done\n> sooner, I think this would surprise a lot of users in a negative way.\n\nIt would also break a bunch of our regression tests, which expect a\nVACUUM to complete immediately.\n\n>> Can we at least have a vacuum_runs_in_background = on | off, to allow\n>> users to take advantage of this WITHOUT needing to rewrite all of\n>> their scripts?\n\n> I'm not entirely convinced that's a good idea, but happy to hear what\n> others think.\n\nI think the answer to that one is flat no. We learned long ago that GUCs\nwith significant semantic impact on queries are a bad idea. For example,\nif a user issues VACUUM expecting behavior A and she gets behavior B\nbecause somebody changed the postgresql.conf entry, she won't be happy.\n\nBasically, I am not buying Simon's requirement that this be transparent.\nI think the downsides would completely outweigh whatever upside there\nmay be (and given the shortage of prior complaints, I don't think the\nupside is very large).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Nov 2022 13:26:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Fri, 18 Nov 2022 at 18:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Nov 18, 2022 at 7:04 AM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> >> So if consistency is also a strong requirement, then maybe we should\n> >> make that new command the default, i.e. make VACUUM always just a\n> >> request to vacuum in background. That way it will be consistent.\n>\n> > Since one fairly common reason for running vacuum in the foreground is\n> > needing to vacuum a table when all autovacuum workers are busy, or\n> > when they are vacuuming it with a cost limit and it needs to get done\n> > sooner, I think this would surprise a lot of users in a negative way.\n>\n> It would also break a bunch of our regression tests, which expect a\n> VACUUM to complete immediately.\n>\n> >> Can we at least have a vacuum_runs_in_background = on | off, to allow\n> >> users to take advantage of this WITHOUT needing to rewrite all of\n> >> their scripts?\n>\n> > I'm not entirely convinced that's a good idea, but happy to hear what\n> > others think.\n>\n> I think the answer to that one is flat no. We learned long ago that GUCs\n> with significant semantic impact on queries are a bad idea. For example,\n> if a user issues VACUUM expecting behavior A and she gets behavior B\n> because somebody changed the postgresql.conf entry, she won't be happy.\n>\n> Basically, I am not buying Simon's requirement that this be transparent.\n> I think the downsides would completely outweigh whatever upside there\n> may be (and given the shortage of prior complaints, I don't think the\n> upside is very large).\n\nJust to say I'm happy with that decision and will switch to the\nrequest for a background vacuum.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 21 Nov 2022 13:36:41 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Fri, 18 Nov 2022 at 11:54, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 17 Nov 2022 at 20:00, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Wed, Nov 16, 2022 at 05:14:07PM -0500, Greg Stark wrote:\n> > > I think this requesting autovacuum worker should be a distinct\n> > > command. Or at least an explicit option to vacuum.\n> >\n> > +1. I was going to suggest VACUUM (NOWAIT) ..\n>\n> Yes, I have no problem with an explicit command.\n>\n> At the moment the patch runs VACUUM in the background in an autovacuum\n> process, but the call is asynchronous, since we do not wait for the\n> command to finish (or even start).\n>\n> So the command names I was thinking of would be one of these:\n>\n> VACUUM (BACKGROUND) or VACUUM (AUTOVACUUM) - which might be clearer\n> or\n> VACUUM (ASYNC) - which is more descriptive of the behavior\n>\n> or we could go for both\n> VACUUM (BACKGROUND, ASYNC) - since this allows us to have a\n> BACKGROUND, SYNC version in the future\n\n\nAttached patch implements VACUUM (BACKGROUND).\n\nThere are quite a few small details to consider; please read the docs\nand comments.\n\nThere is a noticeable delay before the background vacuum starts.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 21 Nov 2022 15:07:25 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 03:07:25PM +0000, Simon Riggs wrote:\n> Attached patch implements VACUUM (BACKGROUND).\n> \n> There are quite a few small details to consider; please read the docs\n> and comments.\n> \n> There is a noticeable delay before the background vacuum starts.\n\nYou disallowed some combinations of unsupported options, but not others,\nlike FULL, PARALLEL, etc. They should either be supported or\nprohibited.\n\n+ /* use default values */\n+ tab.at_params.log_min_duration = 0;\n\n0 isn't the default ?\n\nMaybe VERBOSE should mean to set min_duration=0, otherwise it should use\nthe default ?\n\nYou only handle one rel, but ExecVacuum() has a loop around rels.\n\n+NOTICE: autovacuum of \"vactst\" has been requested, using the options specified\n\n=> I don't think it's useful to say \"using the options specified\".\n\nShould autovacuum de-duplicate requests ?\nBRIN doesn't do that, but it's intended for append-only tables, so the\nissue doesn't really come up.\n\nCould add psql tab-completion.\n\nIs it going to be confusing that the session's GUC variables won't be\ntransmitted to autovacuum ? For example, the freeze and costing\nparameters.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 22 Nov 2022 10:43:25 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Tue, 22 Nov 2022 at 16:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Nov 21, 2022 at 03:07:25PM +0000, Simon Riggs wrote:\n> > Attached patch implements VACUUM (BACKGROUND).\n> >\n> > There are quite a few small details to consider; please read the docs\n> > and comments.\n> >\n> > There is a noticeable delay before the background vacuum starts.\n>\n> You disallowed some combinations of unsupported options, but not others,\n> like FULL, PARALLEL, etc. They should either be supported or\n> prohibited.\n>\n> + /* use default values */\n> + tab.at_params.log_min_duration = 0;\n>\n> 0 isn't the default ?\n>\n> Maybe VERBOSE should mean to set min_duration=0, otherwise it should use\n> the default ?\n\n+1\n\n> You only handle one rel, but ExecVacuum() has a loop around rels.\n>\n> +NOTICE: autovacuum of \"vactst\" has been requested, using the options specified\n>\n> => I don't think it's useful to say \"using the options specified\".\n>\n> Should autovacuum de-duplicate requests ?\n> BRIN doesn't do that, but it's intended for append-only tables, so the\n> issue doesn't really come up.\n\nEasy to do\n\n> Could add psql tab-completion.\n>\n> Is it going to be confusing that the session's GUC variables won't be\n> transmitted to autovacuum ? For example, the freeze and costing\n> parameters.\n\nI think we should start with the \"how do I want it to behave\" parts of\nthe above and leave spelling and tab completion as final items.\n\nOther questions are whether there should be a limit on number of\nbackground vacuums submitted at any time.\nWhether there should be a GUC that specifies the max number of queued tasks.\nDo we need a query that shows what items are queued?\netc\n\nJustin, if you wanted to take up the patch from here, I would be more\nthan happy. 
You have the knowledge and insight to make this work\nright.\n\nWe should probably start a new CF patch entry so we can return the\noriginal patch as rejected, then continue with this new idea\nseparately.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 22 Nov 2022 17:16:59 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
},
{
"msg_contents": "On Tue, Nov 22, 2022 at 05:16:59PM +0000, Simon Riggs wrote:\n> Justin, if you wanted to take up the patch from here, I would be more\n> than happy. You have the knowledge and insight to make this work\n> right.\n\nI have no particular use for this, so I wouldn't be a good person to\nfinish or shepherd the patch.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 23 Nov 2022 15:46:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow single table VACUUM in transaction block"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile trying to measure if there is any gain from the change as\ndiscussed in [1], I happened to notice another place that is slowed down\nby list_delete_first. I'm using the query as below:\n\n(n=1000000;\nprintf \"explain (summary on) select * from t where \"\nfor ((i=1;i<$n;i++)); do printf \"a = $i or \"; done;\nprintf \"a = $n;\"\n) | psql\n\nAnd I notice that a large part of planning time is spent on the\nlist_delete_first calls inside simplify_or_arguments().\n\nI think the issue here is clear and straightforward: list_delete_first\nhas an O(N) cost due to data movement. And I believe similar issue has\nbeen discussed several times before.\n\nI wonder if we can improve it by using list_delete_last instead, so I\ntried the following change:\n\n--- a/src/backend/optimizer/util/clauses.c\n+++ b/src/backend/optimizer/util/clauses.c\n@@ -3612,9 +3612,9 @@ simplify_or_arguments(List *args,\n unprocessed_args = list_copy(args);\n while (unprocessed_args)\n {\n- Node *arg = (Node *) linitial(unprocessed_args);\n+ Node *arg = (Node *) llast(unprocessed_args);\n\n- unprocessed_args = list_delete_first(unprocessed_args);\n+ unprocessed_args = list_delete_last(unprocessed_args);\n\n\nWith this change, in my box the planning time for the query above is\nreduced from 64257.784 ms to 1411.666 ms, a big improvement. The side\neffect is that it results in a lot of plan diffs in regression tests,\nbut they are all about different order of OR arguments.\n\nI believe simplify_and_arguments() can also benefit from similar\nchanges. But I'm not sure if we could have such a long AND/OR arguments\nin real world. So is this worth doing?\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4-RXhgz0i4O1z62gt%2BbTLTM5vXYyYhgnius0j_txLH7hg%40mail.gmail.com\n\nThanks\nRichard\n\nHi hackers,While trying to measure if there is any gain from the change asdiscussed in [1], I happened to notice another place that is slowed downby list_delete_first. 
I'm using the query as below:(n=1000000;printf \"explain (summary on) select * from t where \"for ((i=1;i<$n;i++)); do printf \"a = $i or \"; done;printf \"a = $n;\") | psqlAnd I notice that a large part of planning time is spent on thelist_delete_first calls inside simplify_or_arguments().I think the issue here is clear and straightforward: list_delete_firsthas an O(N) cost due to data movement. And I believe similar issue hasbeen discussed several times before.I wonder if we can improve it by using list_delete_last instead, so Itried the following change:--- a/src/backend/optimizer/util/clauses.c+++ b/src/backend/optimizer/util/clauses.c@@ -3612,9 +3612,9 @@ simplify_or_arguments(List *args, unprocessed_args = list_copy(args); while (unprocessed_args) {- Node *arg = (Node *) linitial(unprocessed_args);+ Node *arg = (Node *) llast(unprocessed_args);- unprocessed_args = list_delete_first(unprocessed_args);+ unprocessed_args = list_delete_last(unprocessed_args);With this change, in my box the planning time for the query above isreduced from 64257.784 ms to 1411.666 ms, a big improvement. The sideeffect is that it results in a lot of plan diffs in regression tests,but they are all about different order of OR arguments.I believe simplify_and_arguments() can also benefit from similarchanges. But I'm not sure if we could have such a long AND/OR argumentsin real world. So is this worth doing?[1] https://www.postgresql.org/message-id/CAMbWs4-RXhgz0i4O1z62gt%2BbTLTM5vXYyYhgnius0j_txLH7hg%40mail.gmail.comThanksRichard",
"msg_date": "Thu, 27 Oct 2022 18:06:45 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid using list_delete_first in simplify_or/and_arguments"
}
] |
[
{
"msg_contents": "In the past, developers have wondered how we can provide \"--dry-run\"\nfunctionality\nhttps://www.postgresql.org/message-id/15791.1450383201%40sss.pgh.pa.us\n\nThis is important for application developers, especially when\nmigrating programs to Postgres.\n\nPresented here are 3 features aimed at developers, each of which is\nbeing actively used by me in a large and complex migration project.\n\n* psql --parse-only\nChecks the syntax of all SQL in a script, but without actually\nexecuting it. This is very important in the early stages of complex\nmigrations because we need to see if the code would generate syntax\nerrors before we attempt to execute it. When there are many\ndependencies between objects, actual execution fails very quickly if\nwe run in a single transaction, yet running outside of a transaction\ncan leave a difficult cleanup task. Fixing errors iteratively is\ndifficult when there are long chains of dependencies between objects,\nsince there is no easy way to predict how long it will take to make\neverything work unless you understand how many syntax errors exist in\nthe script.\n001_psql_parse_only.v1.patch\n\n* nested transactions = off (default) | all | on\nHandle nested BEGIN/COMMIT, which can cause chaos on failure. This is\nan important part of guaranteeing that everything that gets executed\nis part of a single atomic transaction, which can then be rolled back\n- this is a pre-requisite for the last feature.\n002_nested_xacts.v7.patch\nThe default behavior is unchanged (off)\nSetting \"all\" treats nested BEGIN/COMMIT as subtransactions, allowing\nsome parts to fail without rolling back the outer transaction.\nSetting \"outer\" flattens nested BEGIN/COMMIT into one single outer\ntransaction, so that any failure rolls back the entire transaction.\n\n* rollback_on_commit = off (default) | on\nForce transactions to fail their final commit, ensuring that no\nlasting change is made when a script is tested. i.e. 
accept COMMIT,\nbut do rollback instead.\n003_rollback_on_commit.v1.patch\n\nWe will probably want to review these on separate threads, but the\ncommon purpose of these features is hopefully clear from these notes.\n\n001 and 003 are fairly small patches, 002 is longer.\n\nComments please\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 27 Oct 2022 12:09:42 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Code checks for App Devs, using new options for transaction behavior"
},
{
"msg_contents": "On Thu, 27 Oct 2022 at 12:09, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> Comments please\n\nUpdate from patch tester results.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 27 Oct 2022 17:35:38 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "Op 27-10-2022 om 18:35 schreef Simon Riggs:\n> On Thu, 27 Oct 2022 at 12:09, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> \n>> Comments please\n> \n> Update from patch tester results.\n> \n\n > [001_psql_parse_only.v1.patch ]\n > [002_nested_xacts.v7.patch ]\n > [003_rollback_on_commit.v1.patch ]\n > [004_add_params_to_sample.v1.patch]\n\n\npatch 002 has (2x) :\n 'transction' should be\n 'transaction'\n\nalso in patch 002:\n 'at any level will be abort' should be\n 'at any level will abort'\n\nI also dislike the 'we' in\n\n 'Once we reach the top-level transaction,'\n\nThat seems a bit too much like the 'we developers working together to \nmake a database server system' which is of course used often and \nusefully on this mailinglist and in code itself. But I think \nuser-facing docs should be careful with that team-building 'we'. I \nremember well how it confused me, many years ago. Better, IMHO:\n\n 'Once the top-level transaction is reached,'\n\n\nThanks,\n\nErik Rijkers\n\n\n",
"msg_date": "Fri, 28 Oct 2022 08:54:29 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs, using new options for transaction\n behavior"
},
{
"msg_contents": "On Fri, 28 Oct 2022 at 07:54, Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Op 27-10-2022 om 18:35 schreef Simon Riggs:\n> > On Thu, 27 Oct 2022 at 12:09, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> >> Comments please\n> >\n> > Update from patch tester results.\n> >\n>\n> > [001_psql_parse_only.v1.patch ]\n> > [002_nested_xacts.v7.patch ]\n> > [003_rollback_on_commit.v1.patch ]\n> > [004_add_params_to_sample.v1.patch]\n>\n>\n> patch 002 has (2x) :\n> 'transction' should be\n> 'transaction'\n>\n> also in patch 002:\n> 'at any level will be abort' should be\n> 'at any level will abort'\n>\n> I also dislike the 'we' in\n>\n> 'Once we reach the top-level transaction,'\n>\n> That seems a bit too much like the 'we developers working together to\n> make a database server system' which is of course used often and\n> usefully on this mailinglist and in code itself. But I think\n> user-facing docs should be careful with that team-building 'we'. I\n> remember well how it confused me, many years ago. Better, IMHO:\n>\n> 'Once the top-level transaction is reached,'\n\nThanks for the feedback, I will make all of those corrections in the\nnext version.\n\nI'm guessing you like the features??\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 28 Oct 2022 10:33:25 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Fri, 28 Oct 2022 at 10:33, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> Thanks for the feedback, I will make all of those corrections in the\n> next version.\n\nNew version attached. I've rolled 002-004 into one patch, but can\nsplit again as needed.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sun, 30 Oct 2022 18:01:55 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Sun, Oct 30, 2022 at 11:32 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, 28 Oct 2022 at 10:33, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > Thanks for the feedback, I will make all of those corrections in the\n> > next version.\n>\n> New version attached. I've rolled 002-004 into one patch, but can\n> split again as needed.\n\nI like the idea of \"parse only\" and \"nested xact\", thanks for working\non this. I will look into patches in more detail, especially nested\nxact. IMHO there is no point in merging \"nested xact\" and \"rollback on\ncommit\". They might be changing the same code location but these two\nare completely different ideas, in fact all these three should be\nreviewed as three separate threads as you mentioned in the first email\nin the thread.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:23:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 4:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Oct 30, 2022 at 11:32 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Fri, 28 Oct 2022 at 10:33, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > > Thanks for the feedback, I will make all of those corrections in the\n> > > next version.\n> >\n> > New version attached. I've rolled 002-004 into one patch, but can\n> > split again as needed.\n>\n> I like the idea of \"parse only\" and \"nested xact\", thanks for working\n> on this. I will look into patches in more detail, especially nested\n> xact. IMHO there is no point in merging \"nested xact\" and \"rollback on\n> commit\". They might be changing the same code location but these two\n> are completely different ideas, in fact all these three should be\n> reviewed as three separate threads as you mentioned in the first email\n> in the thread.\n\nWhat is the behavior if \"nested_transactions\" value is changed within\na transaction execution, suppose the value was on and we have created\na few levels of nested subtransactions and within the same transaction\nI switched it to off or to outer?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:03:00 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 5:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 4:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sun, Oct 30, 2022 at 11:32 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > On Fri, 28 Oct 2022 at 10:33, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > > Thanks for the feedback, I will make all of those corrections in the\n> > > > next version.\n> > >\n> > > New version attached. I've rolled 002-004 into one patch, but can\n> > > split again as needed.\n> >\n> > I like the idea of \"parse only\" and \"nested xact\", thanks for working\n> > on this. I will look into patches in more detail, especially nested\n> > xact. IMHO there is no point in merging \"nested xact\" and \"rollback on\n> > commit\". They might be changing the same code location but these two\n> > are completely different ideas, in fact all these three should be\n> > reviewed as three separate threads as you mentioned in the first email\n> > in the thread.\n>\n> What is the behavior if \"nested_transactions\" value is changed within\n> a transaction execution, suppose the value was on and we have created\n> a few levels of nested subtransactions and within the same transaction\n> I switched it to off or to outer?\n\n1.\n@@ -3815,6 +3861,10 @@ PrepareTransactionBlock(const char *gid)\n /* Set up to commit the current transaction */\n result = EndTransactionBlock(false);\n\n+ /* Don't allow prepare until we are back to an unnested state at level 0 */\n+ if (XactNestingLevel > 0)\n+ return false;\n\n\nI am not sure whether it is good to not allow PREPARE or we can just\nprepare it and come out of the complete nested transaction. Suppose\nwe have multiple savepoints and we say prepare then it will just\nsucceed so why does it have to be different here?\n\n\n2. 
case TBLOCK_SUBABORT:\n ereport(WARNING,\n (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),\n errmsg(\"there is already a transaction in progress\")));\n+ if (XactNesting == XACT_NEST_OUTER)\n+ XactNestingLevel++;\n break;\n\nI did not understand what this change is for, can you tell me the\nscenario or a test case to hit this?\n\nRemaining part w.r.t \"nested xact\" patch looks fine, I haven't tested\nit yet though.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:52:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Mon, 31 Oct 2022 at 11:33, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> What is the behavior if \"nested_transactions\" value is changed within\n> a transaction execution, suppose the value was on and we have created\n> a few levels of nested subtransactions and within the same transaction\n> I switched it to off or to outer?\n\nPatch does the same dance as with other xact variables.\n\nXactNesting is the value within the transaction and in the patch this\nis not exported, so cannot be set externally.\n\nXactNesting is set at transaction start to the variable\nDefaultXactNesting, which is set by the GUC.\n\nSo its not a problem, but thanks for checking.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 31 Oct 2022 12:43:13 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Mon, 31 Oct 2022 at 12:22, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 5:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Oct 31, 2022 at 4:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Sun, Oct 30, 2022 at 11:32 PM Simon Riggs\n> > > <simon.riggs@enterprisedb.com> wrote:\n> > > >\n> > > > On Fri, 28 Oct 2022 at 10:33, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > > >\n> > > > > Thanks for the feedback, I will make all of those corrections in the\n> > > > > next version.\n> > > >\n> > > > New version attached. I've rolled 002-004 into one patch, but can\n> > > > split again as needed.\n> > >\n> > > I like the idea of \"parse only\" and \"nested xact\", thanks for working\n> > > on this. I will look into patches in more detail, especially nested\n> > > xact. IMHO there is no point in merging \"nested xact\" and \"rollback on\n> > > commit\". They might be changing the same code location but these two\n> > > are completely different ideas, in fact all these three should be\n> > > reviewed as three separate threads as you mentioned in the first email\n> > > in the thread.\n> >\n> > What is the behavior if \"nested_transactions\" value is changed within\n> > a transaction execution, suppose the value was on and we have created\n> > a few levels of nested subtransactions and within the same transaction\n> > I switched it to off or to outer?\n>\n> 1.\n> @@ -3815,6 +3861,10 @@ PrepareTransactionBlock(const char *gid)\n> /* Set up to commit the current transaction */\n> result = EndTransactionBlock(false);\n>\n> + /* Don't allow prepare until we are back to an unnested state at level 0 */\n> + if (XactNestingLevel > 0)\n> + return false;\n>\n>\n> I am not sure whether it is good to not allow PREPARE or we can just\n> prepare it and come out of the complete nested transaction. 
Suppose\n> we have multiple savepoints and we say prepare then it will just\n> succeed so why does it have to be different here?\n\nI'm happy to discuss what the behavior should be in this case. It is\nnot a common case,\nand people don't put PREPARE in their scripts except maybe in a test.\n\nMy reasoning for this code is that we don't want to accept a COMMIT\nuntil we reach top-level of nesting,\nso the behavior should be similar for PREPARE, which is just\nfirst-half of final commit.\n\nNote that the nesting of begin/commit is completely separate to the\nexistence/non-existence of subtransactions, especially with\nnested_transactions = 'outer'\n\n\n> 2. case TBLOCK_SUBABORT:\n> ereport(WARNING,\n> (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),\n> errmsg(\"there is already a transaction in progress\")));\n> + if (XactNesting == XACT_NEST_OUTER)\n> + XactNestingLevel++;\n> break;\n>\n> I did not understand what this change is for, can you tell me the\n> scenario or a test case to hit this?\n\nWell spotted, thanks. That seems to be some kind of artefact.\n\nThere is no test that exercises that since it is an unintended change,\nso I have removed it.\n\n\n> Remaining part w.r.t \"nested xact\" patch looks fine, I haven't tested\n> it yet though.\n\nNew versions attached, separated again as you suggested.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 31 Oct 2022 13:24:14 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 6:54 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> > > What is the behavior if \"nested_transactions\" value is changed within\n> > > a transaction execution, suppose the value was on and we have created\n> > > a few levels of nested subtransactions and within the same transaction\n> > > I switched it to off or to outer?\n\nI think you missed the above comment?\n\n> > I am not sure whether it is good to not allow PREPARE or we can just\n> > prepare it and come out of the complete nested transaction. Suppose\n> > we have multiple savepoints and we say prepare then it will just\n> > succeed so why does it have to be different here?\n>\n> I'm happy to discuss what the behavior should be in this case. It is\n> not a common case,\n> and people don't put PREPARE in their scripts except maybe in a test.\n>\n> My reasoning for this code is that we don't want to accept a COMMIT\n> until we reach top-level of nesting,\n> so the behavior should be similar for PREPARE, which is just\n> first-half of final commit.\n\nYeah this is not a very common case. And we can see opinions from\nothers as well. But I think your reasoning for doing it this way also\nmakes sense to me.\n\nI have some more comments for 0002\n1.\n+ if (XactNesting == XACT_NEST_OUTER && XactNestingLevel > 0)\n+ {\n+ /* Throw ERROR */\n+ ereport(ERROR,\n+ (errmsg(\"nested ROLLBACK, level %u aborts\nouter transaction\", XactNestingLevel--)));\n+ }\n\nI did not understand in case of 'outer' if we are giving rollback from\ninner nesting level why it is throwing error? Documentation just says\nthis[1] but it did not\nmention the error. 
I agree that we might need to give the rollback as\nmany times as the nesting level but giving errors seems confusing to\nme.\n\n[1]\n+ <para>\n+ A setting of <quote>outer</quote> will cause a nested\n+ <command>BEGIN</command> to be remembered, so that an equal number\n+ of <command>COMMIT</command> or <command>ROLLBACK</command> commands\n+ are required to end the nesting. In that case a\n<command>ROLLBACK</command>\n+ at any level will abort the entire outer transaction.\n+ Once we reach the top-level transaction,\n+ the final <command>COMMIT</command> will end the transaction.\n+ This ensures that all commands within the outer transaction are atomic.\n+ </para>\n\n\n2.\n\n+ if (XactNesting == XACT_NEST_OUTER)\n+ {\n+ if (XactNestingLevel <= 0)\n+ s->blockState = TBLOCK_END;\n+ else\n+ ereport(NOTICE,\n+ (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),\n+ errmsg(\"nested COMMIT, level %u\",\nXactNestingLevel)));\n+ XactNestingLevel--;\n+ return true;\n+ }\n\n+ while (s->parent != NULL && !found_subxact)\n {\n+ if (XactNesting == XACT_NEST_ALL &&\n+ XactNestingLevel > 0 &&\n+ PointerIsValid(s->name) &&\n+ strcmp(s->name, NESTED_XACT_NAME) == 0)\n+ found_subxact = true;\n+\n if (s->blockState == TBLOCK_SUBINPROGRESS)\n s->blockState = TBLOCK_SUBCOMMIT\n\nI think these changes should be explained in the comments.\n\n3.\n\n+ if (XactNesting == XACT_NEST_OUTER)\n+ {\n+ if (XactNestingLevel > 0)\n+ {\n+ ereport(NOTICE,\n+ (errmsg(\"nested COMMIT, level %u in\naborted transaction\", XactNestingLevel)));\n+ XactNestingLevel--;\n+ return false;\n+ }\n+ }\n\nBetter to write this as if (XactNesting == XACT_NEST_OUTER &&\nXactNestingLevel > 0) instead of two levels nested if conditions.\n\n4.\n+ if (XactNesting == XACT_NEST_ALL &&\n+ XactNestingLevel > 0 &&\n+ PointerIsValid(s->name) &&\n+ strcmp(s->name, NESTED_XACT_NAME) == 0)\n+ found_subxact = true;\n\nI think this strcmp(s->name, NESTED_XACT_NAME) is done because there\ncould be other types of internal subtransaction also 
like savepoints?\nWhat will be the behavior if someone declares a savepoint with this\nname (\"_internal_nested_xact\"). Will this interfere with this new\nfunctionality? Have we tested scenarios like that?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Nov 2022 09:22:32 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Wed, 2 Nov 2022 at 03:52, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Oct 31, 2022 at 6:54 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > > > What is the behavior if \"nested_transactions\" value is changed within\n> > > > a transaction execution, suppose the value was on and we have created\n> > > > a few levels of nested subtransactions and within the same transaction\n> > > > I switched it to off or to outer?\n>\n> I think you missed the above comment?\n\n[copy of earlier reply]\n\nPatch does the same dance as with other xact variables.\n\nXactNesting is the value within the transaction and in the patch this\nis not exported, so cannot be set externally.\n\nXactNesting is set at transaction start to the variable\nDefaultXactNesting, which is set by the GUC.\n\nSo its not a problem, but thanks for checking.\n\n> > > I am not sure whether it is good to not allow PREPARE or we can just\n> > > prepare it and come out of the complete nested transaction. Suppose\n> > > we have multiple savepoints and we say prepare then it will just\n> > > succeed so why does it have to be different here?\n> >\n> > I'm happy to discuss what the behavior should be in this case. It is\n> > not a common case,\n> > and people don't put PREPARE in their scripts except maybe in a test.\n> >\n> > My reasoning for this code is that we don't want to accept a COMMIT\n> > until we reach top-level of nesting,\n> > so the behavior should be similar for PREPARE, which is just\n> > first-half of final commit.\n>\n> Yeah this is not a very common case. And we can see opinions from\n> others as well. 
But I think your reasoning for doing it this way also\n> makes sense to me.\n>\n> I have some more comments for 0002\n> 1.\n> + if (XactNesting == XACT_NEST_OUTER && XactNestingLevel > 0)\n> + {\n> + /* Throw ERROR */\n> + ereport(ERROR,\n> + (errmsg(\"nested ROLLBACK, level %u aborts\n> outer transaction\", XactNestingLevel--)));\n> + }\n>\n> I did not understand in case of 'outer' if we are giving rollback from\n> inner nesting level why it is throwing error? Documentation just says\n> this[1] but it did not\n> mention the error. I agree that we might need to give the rollback as\n> many times as the nesting level but giving errors seems confusing to\n> me.\n\nDocs mention ROLLBACK at any level will abort the transaction, which\nis what the ERROR does.\n\n> [1]\n> + <para>\n> + A setting of <quote>outer</quote> will cause a nested\n> + <command>BEGIN</command> to be remembered, so that an equal number\n> + of <command>COMMIT</command> or <command>ROLLBACK</command> commands\n> + are required to end the nesting. In that case a\n> <command>ROLLBACK</command>\n> + at any level will abort the entire outer transaction.\n> + Once we reach the top-level transaction,\n> + the final <command>COMMIT</command> will end the transaction.\n> + This ensures that all commands within the outer transaction are atomic.\n> + </para>\n>\n>\n> 2.\n>\n> + if (XactNesting == XACT_NEST_OUTER)\n> + {\n> + if (XactNestingLevel <= 0)\n> + s->blockState = TBLOCK_END;\n> + else\n> + ereport(NOTICE,\n> + (errcode(ERRCODE_ACTIVE_SQL_TRANSACTION),\n> + errmsg(\"nested COMMIT, level %u\",\n> XactNestingLevel)));\n> + XactNestingLevel--;\n> + return true;\n> + }\n\nThis is decrementing the nesting level for XACT_NEST_OUTER,\nuntil we reach the top level, when the commit is allowed.\n\n> + while (s->parent != NULL && !found_subxact)\n> {\n> + if (XactNesting == XACT_NEST_ALL &&\n> + XactNestingLevel > 0 &&\n> + PointerIsValid(s->name) &&\n> + strcmp(s->name, NESTED_XACT_NAME) == 0)\n> + found_subxact = true;\n> +\n> if (s->blockState == TBLOCK_SUBINPROGRESS)\n> s->blockState = TBLOCK_SUBCOMMIT\n>\n> I think these changes should be explained in the comments.\n\nThis locates the correct subxact by name, as you mention in (4)\n\n> 3.\n>\n> + if (XactNesting == XACT_NEST_OUTER)\n> + {\n> + if (XactNestingLevel > 0)\n> + {\n> + ereport(NOTICE,\n> + (errmsg(\"nested COMMIT, level %u in\n> aborted transaction\", XactNestingLevel)));\n> + XactNestingLevel--;\n> + return false;\n> + }\n> + }\n>\n> Better to write this as if (XactNesting == XACT_NEST_OUTER &&\n> XactNestingLevel > 0) instead of two levels nested if conditions.\n\nSure. I had been aiming for clarity.\n\n> 4.\n> + if (XactNesting == XACT_NEST_ALL &&\n> + XactNestingLevel > 0 &&\n> + PointerIsValid(s->name) &&\n> + strcmp(s->name, NESTED_XACT_NAME) == 0)\n> + found_subxact = true;\n>\n> I think this strcmp(s->name, NESTED_XACT_NAME) is done because there\n> could be other types of internal subtransaction also like savepoints?\n\nIn XACT_NEST_ALL mode, each nested subxact that is created needs a name.\nThe name is used to ensure we roll back to the correct subxact, which\nmight exist.\n\n> What will be the behavior if someone declares a savepoint with this\n> name (\"_internal_nested_xact\"). Will this interfere with this new\n> functionality?\n\nClearly! I guess you are saying we should disallow them.\n\n> Have we tested scenarios like that?\n\nNo, but that can be done.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 2 Nov 2022 07:40:16 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Wed, 2 Nov 2022 at 07:40, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n\n> > What will be the behavior if someone declares a savepoint with this\n> > name (\"_internal_nested_xact\"). Will this interfere with this new\n> > functionality?\n>\n> Clearly! I guess you are saying we should disallow them.\n>\n> > Have we tested scenarios like that?\n>\n> No, but that can be done.\n\nMore tests as requested, plus minor code rework, plus comment updates.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 7 Nov 2022 14:25:48 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Mon, 7 Nov 2022 at 14:25, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Wed, 2 Nov 2022 at 07:40, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> > > What will be the behavior if someone declares a savepoint with this\n> > > name (\"_internal_nested_xact\"). Will this interfere with this new\n> > > functionality?\n> >\n> > Clearly! I guess you are saying we should disallow them.\n> >\n> > > Have we tested scenarios like that?\n> >\n> > No, but that can be done.\n>\n> More tests as requested, plus minor code rework, plus comment updates.\n\nNew versions\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 22 Nov 2022 16:02:40 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "On Thu, 27 Oct 2022 at 07:10, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> In the past, developers have wondered how we can provide \"--dry-run\"\n> functionality\n\nThat would be an awesome functionality, indeed. I have concerns of how\nfeasible it is in general but I think providing features to allow\ndevelopers to build it for their use cases is a good approach. The\ncorner cases that might not be possible in general might be tractable\ndevelopers willing to constrain their development environment or use\ninformation available outside Postgres.\n\nBut... I have concerns about some of the design here.\n\n> * psql --parse-only\n> Checks the syntax of all SQL in a script, but without actually\n> executing it. This is very important in the early stages of complex\n> migrations because we need to see if the code would generate syntax\n> errors before we attempt to execute it. When there are many\n> dependencies between objects, actual execution fails very quickly if\n> we run in a single transaction, yet running outside of a transaction\n> can leave a difficult cleanup task. Fixing errors iteratively is\n> difficult when there are long chains of dependencies between objects,\n> since there is no easy way to predict how long it will take to make\n> everything work unless you understand how many syntax errors exist in\n> the script.\n> 001_psql_parse_only.v1.patch\n\nThis effectively enables \\gdesc mode for every query. It needs docs\nexplaining what's actually going to happen and how to use it because\nthat wasn't super obvious to me even after reading the patch. I'm not\nsure reusing DescribeQuery() and then returning early is the best\nidea.\n\nBut more importantly it's only going to handle the simplest scripts\nthat don't do DDL that further statements will depend on. That at\nleast needs to be documented.\n\n\n> * nested transactions = off (default) | all | on\n> Handle nested BEGIN/COMMIT, which can cause chaos on failure. This is\n> an important part of guaranteeing that everything that gets executed\n> is part of a single atomic transaction, which can then be rolled back\n> - this is a pre-requisite for the last feature.\n> 002_nested_xacts.v7.patch\n> The default behavior is unchanged (off)\n> Setting \"all\" treats nested BEGIN/COMMIT as subtransactions, allowing\n> some parts to fail without rolling back the outer transaction.\n> Setting \"outer\" flattens nested BEGIN/COMMIT into one single outer\n> transaction, so that any failure rolls back the entire transaction.\n\nI think we've been burned pretty badly by GUCs that control SQL\nsemantics before. I think there was discussion at the time nested\ntransactions went in and there must have been a reason we did\nSAVEPOINT rather than make nested BEGINs do things like this. But\nregardless if we do want to change what nested BEGINs do I think we\nhave to decide what behaviour we want, think about the backwards\ncompatibility impacts, and make the change. We can't make it just for\nsome people some of the time based on a GUC. Doing that makes it\nimpossible to write scripts that work consistently.\n\nI'm not clear what happens if you have this feature enabled and *also*\nuse SAVEPOINTs...\n\nYou say this is a prerequisite for 003 and I see how they're related\nthough I don't immediately see why it should be necessary to change\nnested BEGIN behaviour to make that work.\n\n> * rollback_on_commit = off (default) | on\n> Force transactions to fail their final commit, ensuring that no\n> lasting change is made when a script is tested. i.e. accept COMMIT,\n> but do rollback instead.\n> 003_rollback_on_commit.v1.patch\n\nI suppose technically this is also a \"semantics controlled by a GUC\"\nbut I guess it's safe since you would only set this when you want this\ndebugging environment and then you really do want it to be global.\n\nI'm not sure it's super safe though. Like, dblink connections can\nbreak it, what happens if it gets turned off midway through a\ntransaction?\n\nI wonder if this should be handled by the client itself the way\nautocommit is. Like have an \"autocommit\" mode of \"autorollback\"\ninstead. That would mean having to add it to every client library of\ncourse but then perhaps it would be able to change client behaviour in\nways that make sense at the same time.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:05:23 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> On Thu, 27 Oct 2022 at 07:10, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> * nested transactions = off (default) | all | on\n>> Handle nested BEGIN/COMMIT, which can cause chaos on failure.\n\n> I think we've been burned pretty badly by GUCs that control SQL\n> semantics before.\n\nYeah, this idea is an absolute nonstarter. rollback_on_commit seems\nexcessively dangerous as well compared to the value.\n\n> I think there was discussion at the time nested\n> transactions went in and there must have been a reason we did\n> SAVEPOINT rather than make nested BEGINs do things like this.\n\nI believe the reason was \"because the SQL standard says so\".\n\nI'm not sure if any of these proposals are still live now that\nSimon's retired. Presumably somebody else would have to push\nthem forward for there to be a chance of anything happening.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:43:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Code checks for App Devs,\n using new options for transaction behavior"
}
] |
[
{
"msg_contents": "Hi,\n\n\nWe would like to share a proposal of a patch, where we have added order by\nclause in two select statements in src/test/regress/sql/insert.sql file and\nrespective changes in src/test/regress/expected/insert.out output file.\n\nThis would help in generating output in consistent sequence, as sometimes\nwe have observed change in sequence in output.\n\nPlease find the patch attached <Proposal_OrderBy_insert.sql.out.patch>\n\n\nRegards,\nNishant Sharma\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 Oct 2022 18:21:00 +0530",
"msg_from": "Nishant Sharma <nishant.sharma@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PROPOSAL] : Use of ORDER BY clause in insert.sql"
},
{
"msg_contents": "Nishant Sharma <nishant.sharma@enterprisedb.com> writes:\n> We would like to share a proposal of a patch, where we have added order by\n> clause in two select statements in src/test/regress/sql/insert.sql file and\n> respective changes in src/test/regress/expected/insert.out output file.\n\n> This would help in generating output in consistent sequence, as sometimes\n> we have observed change in sequence in output.\n\nPlease be specific about the circumstances in which the output is\nunstable for you. With zero information to go on, it seems about as\nlikely that this change is masking a bug as that it's a good idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Oct 2022 09:24:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] : Use of ORDER BY clause in insert.sql"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 6:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nishant Sharma <nishant.sharma@enterprisedb.com> writes:\n> > We would like to share a proposal of a patch, where we have added order by\n> > clause in two select statements in src/test/regress/sql/insert.sql file and\n> > respective changes in src/test/regress/expected/insert.out output file.\n>\n> > This would help in generating output in consistent sequence, as sometimes\n> > we have observed change in sequence in output.\n>\n> Please be specific about the circumstances in which the output is\n> unstable for you. With zero information to go on, it seems about as\n> likely that this change is masking a bug as that it's a good idea.\n>\n\nAt the first glance, I thought the patch is pretty much obvious, and\nwe usually add an ORDER BY clause to ensure stable output. If we\nare too sure that the output usually comes in the same order then the\nORDER BY clause that exists in other tests seems useless. I am a bit\nconfused & what could be a possible bug?\n\nI have tested on my Centos and the Mac OS, insert.sql test is giving\nstable output, I didn't find failure in the subsequent runs too but I\nam not sure if that is enough evidence to skip the ORDER BY clause.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 28 Oct 2022 09:20:33 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] : Use of ORDER BY clause in insert.sql"
},
{
"msg_contents": "On Fri, 28 Oct 2022 at 16:51, Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Oct 27, 2022 at 6:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Please be specific about the circumstances in which the output is\n> > unstable for you. With zero information to go on, it seems about as\n> > likely that this change is masking a bug as that it's a good idea.\n> >\n>\n> At the first glance, I thought the patch is pretty much obvious, and\n> we usually add an ORDER BY clause to ensure stable output.\n\nUnfortunately, you'll need to do better than that. We're not in the\nbusiness of accepting patches with zero justification for why they're\nrequired. If you're not willing to do the analysis on why the order\nchanges sometimes, why should we accept your patch?\n\nIf you can't find the problem then you should modify insert.sql to\nEXPLAIN the problem query to see if the plan has changed between the\npassing and failing run. The only thing that comes to mind about why\nthis test might produce rows in a different order would be if a\nparallel Append was sorting the subpaths by cost (See\ncreate_append_path's call to list_sort) and the costs were for some\nreason coming out differently sometimes. It's hard to imagine why this\nquery would be parallelised though. If you show us the EXPLAIN from a\npassing and failing run, it might help us see the problem.\n\n> If we\n> are too sure that the output usually comes in the same order then the\n> ORDER BY clause that exists in other tests seems useless. I am a bit\n> confused & what could be a possible bug?\n\nYou can't claim that if this test shouldn't get an ORDER BY that all\ntests shouldn't have an ORDER BY. That's just crazy. What if the test\nis doing something like testing sort?!\n\nDavid\n\n\n",
"msg_date": "Fri, 28 Oct 2022 17:58:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] : Use of ORDER BY clause in insert.sql"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 28 Oct 2022 at 16:51, Amul Sul <sulamul@gmail.com> wrote:\n>> If we\n>> are too sure that the output usually comes in the same order then the\n>> ORDER BY clause that exists in other tests seems useless. I am a bit\n>> confused & what could be a possible bug?\n\n> You can't claim that if this test shouldn't get an ORDER BY that all\n> tests shouldn't have an ORDER BY. That's just crazy. What if the test\n> is doing something like testing sort?!\n\nThe general policy is that we'll add ORDER BY when a test is demonstrated\nto have unstable output order for identifiable environmental reasons\n(e.g. locale dependency) or timing reasons (e.g. background autovacuum\nsometimes changing statistics). But the key word there is \"identifiable\".\nWithout some evidence as to what's causing this, it remains possible\nthat it's a code bug not the fault of the test case.\n\nregress.sgml explains the policy further:\n\n You might wonder why we don't order all the regression test queries explicitly\n to get rid of this issue once and for all. The reason is that that would\n make the regression tests less useful, not more, since they'd tend\n to exercise query plan types that produce ordered results to the\n exclusion of those that don't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Oct 2022 01:13:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] : Use of ORDER BY clause in insert.sql"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 10:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 28 Oct 2022 at 16:51, Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Thu, Oct 27, 2022 at 6:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Please be specific about the circumstances in which the output is\n> > > unstable for you. With zero information to go on, it seems about as\n> > > likely that this change is masking a bug as that it's a good idea.\n> > >\n> >\n> > At the first glance, I thought the patch is pretty much obvious, and\n> > we usually add an ORDER BY clause to ensure stable output.\n>\n> Unfortunately, you'll need to do better than that. We're not in the\n> business of accepting patches with zero justification for why they're\n> required. If you're not willing to do the analysis on why the order\n> changes sometimes, why should we accept your patch?\n>\n\nUnfortunately the test is not failing at me. Otherwise, I would have\ndone that analysis. When I saw the patch for the first time, somehow,\nI didn't think anything spurious due to my misconception that we\nusually add the ORDER BY clause for the select queries just to be\nsure.\n\n> If you can't find the problem then you should modify insert.sql to\n> EXPLAIN the problem query to see if the plan has changed between the\n> passing and failing run. The only thing that comes to mind about why\n> this test might produce rows in a different order would be if a\n> parallel Append was sorting the subpaths by cost (See\n> create_append_path's call to list_sort) and the costs were for some\n> reason coming out differently sometimes. It's hard to imagine why this\n> query would be parallelised though. If you show us the EXPLAIN from a\n> passing and failing run, it might help us see the problem.\n>\n\nUnderstood.\n\n> > If we\n> > are too sure that the output usually comes in the same order then the\n> > ORDER BY clause that exists in other tests seems useless. I am a bit\n> > confused & what could be a possible bug?\n>\n> You can't claim that if this test shouldn't get an ORDER BY that all\n> tests shouldn't have an ORDER BY. That's just crazy. What if the test\n> is doing something like testing sort?!\n>\n\nThat I can understand that the sorted output doesn't need further\nsorting. I am just referring to the simple SELECT queries that do not\nhave any sorting.\n\nThanks & Regards,\nAmul\n\n\n",
"msg_date": "Fri, 28 Oct 2022 11:23:23 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] : Use of ORDER BY clause in insert.sql"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 10:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Fri, 28 Oct 2022 at 16:51, Amul Sul <sulamul@gmail.com> wrote:\n> >> If we\n> >> are too sure that the output usually comes in the same order then the\n> >> ORDER BY clause that exists in other tests seems useless. I am a bit\n> >> confused & what could be a possible bug?\n>\n> > You can't claim that if this test shouldn't get an ORDER BY that all\n> > tests shouldn't have an ORDER BY. That's just crazy. What if the test\n> > is doing something like testing sort?!\n>\n> The general policy is that we'll add ORDER BY when a test is demonstrated\n> to have unstable output order for identifiable environmental reasons\n> (e.g. locale dependency) or timing reasons (e.g. background autovacuum\n> sometimes changing statistics). But the key word there is \"identifiable\".\n> Without some evidence as to what's causing this, it remains possible\n> that it's a code bug not the fault of the test case.\n>\n> regress.sgml explains the policy further:\n>\n> You might wonder why we don't order all the regression test queries explicitly\n> to get rid of this issue once and for all. The reason is that that would\n> make the regression tests less useful, not more, since they'd tend\n> to exercise query plan types that produce ordered results to the\n> exclusion of those that don't.\n>\n\nUnderstood. Thanks for the clarification.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Fri, 28 Oct 2022 11:23:55 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PROPOSAL] : Use of ORDER BY clause in insert.sql"
}
] |
[
{
"msg_contents": "Hi,\n\nTab completion for ALTER FUNCTION/PROCEDURE/ROUTINE action was\nmissing, this patch adds the tab completion for the same.\n\nRegards,\nVignesh",
"msg_date": "Thu, 27 Oct 2022 20:38:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 12:08 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> Tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE action was\n> missing, this patch adds the tab completion for the same.\n>\n> Regards,\n> Vignesh\n\nHi,\nI applied your patch and did some tests.\nIs it okay not to consider SET and RESET commands? (e.g ALTER FUNCTION)\n\n---\nRegards,\nDongWook Lee.\n\n\n",
"msg_date": "Fri, 28 Oct 2022 11:32:20 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Fri, 28 Oct 2022 at 08:02, Dong Wook Lee <sh95119@gmail.com> wrote:\n>\n> On Fri, Oct 28, 2022 at 12:08 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE action was\n> > missing, this patch adds the tab completion for the same.\n> >\n> > Regards,\n> > Vignesh\n>\n> Hi,\n> I applied your patch and did some tests.\n> Is it okay not to consider SET and RESET commands? (e.g ALTER FUNCTION)\n\nThose also should be handled, attached v2 version includes the changes\nfor the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 28 Oct 2022 17:34:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 05:34:37PM +0530, vignesh C wrote:\n> Those also should be handled, attached v2 version includes the changes\n> for the same.\n\nThe basic options supported by PROCEDURE are a subset of ROUTINE with a\ndifference of COST, IMMUTABLE, [NOT] LEAKPROOF, ROWS, STABLE\nand VOLATILE.\n\nThe basic options supported by ROUTINE are a subset of FUNCTION with a\ndifference of { CALLED | RETURNS NULL } ON NULL INPUT, STRICT and\nSUPPORT. Is it worth refactoring a bit with common lists?\n\n+ \"RESET\", \"RETURNS NULL ON NULL INPUT \", \"ROWS\",\nExtra space after INPUT here, that's easy to miss.\n--\nMichael",
"msg_date": "Tue, 22 Nov 2022 09:29:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Tue, 22 Nov 2022 at 05:59, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 28, 2022 at 05:34:37PM +0530, vignesh C wrote:\n> > Those also should be handled, attached v2 version includes the changes\n> > for the same.\n>\n> The basic options supported by PROCEDURE are a subset of ROUTINE with a\n> difference of COST, IMMUTABLE, [NOT] LEAKPROOF, ROWS, STABLE\n> and VOLATILE.\n>\n> The basic options supported by ROUTINE are a subset of FUNCTION with a\n> difference of { CALLED | RETURNS NULL } ON NULL INPUT, STRICT and\n> SUPPORT. Is it worth refactoring a bit with common lists?\n\nModified\n\n> + \"RESET\", \"RETURNS NULL ON NULL INPUT \", \"ROWS\",\n> Extra space after INPUT here, that's easy to miss.\n\nGood catch, the attached v3 patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Tue, 22 Nov 2022 11:48:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "Hi Vignesh,\n\nLooks like the patch needs a rebase.\n\nAlso one little suggestion:\n\n+ if (ends_with(prev_wd, ')'))\n> + COMPLETE_WITH(Alter_routine_options, \"CALLED ON NULL INPUT\",\n> + \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\");\n\n\nWhat do you think about gathering FUNCTION options as you did with ROUTINE\noptions.\nSomething like the following would seem nicer, I think.\n\n#define Alter_function_options \\\n> Alter_routine_options, \"CALLED ON NULL INPUT\", \\\n\n\"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\"\n\n\nBest,\n--\nMelih Mutlu\nMicrosoft\n\nHi Vignesh,Looks like the patch needs a rebase.Also one little suggestion:+\t\tif (ends_with(prev_wd, ')'))+\t\t\tCOMPLETE_WITH(Alter_routine_options, \"CALLED ON NULL INPUT\",+\t\t\t\t\t\t \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\");What do you think about gathering FUNCTION options as you did with ROUTINE options. Something like the following would seem nicer, I think.#define Alter_function_options \\Alter_routine_options, \"CALLED ON NULL INPUT\", \\\"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\"Best,--Melih MutluMicrosoft",
"msg_date": "Tue, 6 Dec 2022 18:12:38 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 20:42, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> Looks like the patch needs a rebase.\n\nRebased\n\n> Also one little suggestion:\n>\n>> + if (ends_with(prev_wd, ')'))\n>> + COMPLETE_WITH(Alter_routine_options, \"CALLED ON NULL INPUT\",\n>> + \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\");\n>\n>\n> What do you think about gathering FUNCTION options as you did with ROUTINE options.\n> Something like the following would seem nicer, I think.\n>\n>> #define Alter_function_options \\\n>> Alter_routine_options, \"CALLED ON NULL INPUT\", \\\n>>\n>> \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\"\n\nI did not make it as a macro for alter function options as it is used\nonly in one place whereas the others were required in more than one\nplace.\nThe attached v4 patch is rebased on top of HEAD.\n\nRegards,\nVignesh",
"msg_date": "Wed, 7 Dec 2022 00:41:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "Hi,\n\nvignesh C <vignesh21@gmail.com>, 6 Ara 2022 Sal, 22:12 tarihinde şunu yazdı:\n\n> I did not make it as a macro for alter function options as it is used\n> only in one place whereas the others were required in more than one\n> place.\n>\n\nOkay, makes sense.\n\nI tested the patch and it worked for me.\n\nBest,\n--\nMelih Mutlu\nMicrosoft\n\nHi,vignesh C <vignesh21@gmail.com>, 6 Ara 2022 Sal, 22:12 tarihinde şunu yazdı:I did not make it as a macro for alter function options as it is used\nonly in one place whereas the others were required in more than one\nplace. Okay, makes sense. I tested the patch and it worked for me.Best,--Melih MutluMicrosoft",
"msg_date": "Wed, 7 Dec 2022 11:55:24 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 19:12, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 6 Dec 2022 at 20:42, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> >\n> > Also one little suggestion:\n> >\n> >> + if (ends_with(prev_wd, ')'))\n> >> + COMPLETE_WITH(Alter_routine_options, \"CALLED ON NULL INPUT\",\n> >> + \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\");\n> >\n> > What do you think about gathering FUNCTION options as you did with ROUTINE options.\n> > Something like the following would seem nicer, I think.\n> >\n> >> #define Alter_function_options \\\n> >> Alter_routine_options, \"CALLED ON NULL INPUT\", \\\n> >> \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\"\n>\n> I did not make it as a macro for alter function options as it is used\n> only in one place whereas the others were required in more than one\n> place.\n\nMy feeling is that having this macro somewhat improves readability and\nconsistency between the 3 cases, so I think it's worth it, even if\nit's only used once.\n\nI think it slightly improves readability to keep all the arguments to\nMatches() on one line, and that seems to be the style elsewhere, even\nif that makes the line longer than 80 characters.\n\nAlso in the interests of readability, I think it's slightly easier to\nfollow if the \"ALTER PROCEDURE <name> (...)\" and \"ALTER ROUTINE <name>\n(...)\" cases are made to immediately follow the \"ALTER FUNCTION <name>\n(...)\" case, with the longer/more complex cases following on from\nthat.\n\nThat leads to the attached, which barring objections, I'll push shortly.\n\nWhile playing around with this, I noticed that the \"... SET SCHEMA\"\ncase offers \"FROM CURRENT\" and \"TO\" as completions, which is\nincorrect. It should really offer to complete with a list of schemas.\nHowever, since that's a pre-existing bug in a different region of the\ncode, I think it's best addressed in a separate patch, which probably\nought to be back-patched.\n\nRegards,\nDean",
"msg_date": "Thu, 5 Jan 2023 12:52:30 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 18:22, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Tue, 6 Dec 2022 at 19:12, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, 6 Dec 2022 at 20:42, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> > >\n> > > Also one little suggestion:\n> > >\n> > >> + if (ends_with(prev_wd, ')'))\n> > >> + COMPLETE_WITH(Alter_routine_options, \"CALLED ON NULL INPUT\",\n> > >> + \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\");\n> > >\n> > > What do you think about gathering FUNCTION options as you did with ROUTINE options.\n> > > Something like the following would seem nicer, I think.\n> > >\n> > >> #define Alter_function_options \\\n> > >> Alter_routine_options, \"CALLED ON NULL INPUT\", \\\n> > >> \"RETURNS NULL ON NULL INPUT\", \"STRICT\", \"SUPPORT\"\n> >\n> > I did not make it as a macro for alter function options as it is used\n> > only in one place whereas the others were required in more than one\n> > place.\n>\n> My feeling is that having this macro somewhat improves readability and\n> consistency between the 3 cases, so I think it's worth it, even if\n> it's only used once.\n>\n> I think it slightly improves readability to keep all the arguments to\n> Matches() on one line, and that seems to be the style elsewhere, even\n> if that makes the line longer than 80 characters.\n>\n> Also in the interests of readability, I think it's slightly easier to\n> follow if the \"ALTER PROCEDURE <name> (...)\" and \"ALTER ROUTINE <name>\n> (...)\" cases are made to immediately follow the \"ALTER FUNCTION <name>\n> (...)\" case, with the longer/more complex cases following on from\n> that.\n>\n> That leads to the attached, which barring objections, I'll push shortly.\n\nThe changes look good to me.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 08:07:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Fri, 6 Jan 2023 at 02:38, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 5 Jan 2023 at 18:22, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > That leads to the attached, which barring objections, I'll push shortly.\n>\n> The changes look good to me.\n>\n\nPushed.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 6 Jan 2023 10:03:22 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 12:52, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> While playing around with this, I noticed that the \"... SET SCHEMA\"\n> case offers \"FROM CURRENT\" and \"TO\" as completions, which is\n> incorrect. It should really offer to complete with a list of schemas.\n> However, since that's a pre-existing bug in a different region of the\n> code, I think it's best addressed in a separate patch, which probably\n> ought to be back-patched.\n>\n\nOK, I've pushed and back-patched a fix for that issue too.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:28:10 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
},
{
"msg_contents": "On Fri, 6 Jan 2023 at 15:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 6 Jan 2023 at 02:38, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, 5 Jan 2023 at 18:22, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > >\n> > > That leads to the attached, which barring objections, I'll push shortly.\n> >\n> > The changes look good to me.\n> >\n>\n> Pushed.\n\nThanks for pushing this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 7 Jan 2023 17:56:34 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE"
}
] |
[
{
"msg_contents": "Hi,\n\nI am working on posting a patch series making relation extension more\nscalable. As part of that I was running some benchmarks for workloads that I\nthought should not or just positively impacted - but I was wrong, there was\nsome very significant degradation at very high client counts. After pulling my\nhair out for quite a while to try to understand that behaviour, I figured out\nthat it's just a side-effect of *removing* some other contention. This\nmorning, turns out sleeping helps, I managed to reproduce it in an unmodified\npostgres.\n\n$ cat ~/tmp/txid.sql\nSELECT txid_current();\n$ for c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n \"$c \";pgbench -n -M prepared -f ~/tmp/txid.sql -c$c -j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n1 60174\n2 116169\n4 208119\n8 373685\n16 515247\n32 554726\n64 497508\n128 415097\n256 334923\n512 243679\n768 192959\n1024 157734\n2048 82904\n4096 32007\n\n(I didn't properly round TPS, but that doesn't matter here)\n\n\nPerformance completely falls off a cliff starting at ~256 clients. There's\nactually plenty CPU available here, so this isn't a case of running out of\nCPU time.\n\nRather, the problem is very bad contention on the \"spinlock\" for the lwlock\nwait list. I realized that something in that direction was off when trying to\ninvestigate why I was seeing spin delays of substantial duration (>100ms).\n\nThe problem isn't a fundamental issue with lwlocks, it's that\nLWLockDequeueSelf() does this:\n\n LWLockWaitListLock(lock);\n\n /*\n * Can't just remove ourselves from the list, but we need to iterate over\n * all entries as somebody else could have dequeued us.\n */\n proclist_foreach_modify(iter, &lock->waiters, lwWaitLink)\n {\n if (iter.cur == MyProc->pgprocno)\n {\n found = true;\n proclist_delete(&lock->waiters, iter.cur, lwWaitLink);\n break;\n }\n }\n\nI.e. it iterates over the whole waitlist to \"find itself\". The longer the\nwaitlist gets, the longer this takes. 
And the longer it takes for\nLWLockWakeup() to actually wake up all waiters, the more likely it becomes\nthat LWLockDequeueSelf() needs to be called.\n\n\nWe can't make the trivial optimization and use proclist_contains(), because\nPGPROC->lwWaitLink is also used for the list of processes to wake up in\nLWLockWakeup().\n\nBut I think we can solve that fairly reasonably nonetheless. We can change\nPGPROC->lwWaiting to not just be a boolean, but have three states:\n0: not waiting\n1: waiting in waitlist\n2: waiting to be woken up\n\nwhich we then can use in LWLockDequeueSelf() to only remove ourselves from the\nlist if we're on it. As removal from that list is protected by the wait list\nlock, there's no race to worry about.\n\nclient patched HEAD\n1 60109 60174\n2 112694 116169\n4 214287 208119\n8 377459 373685\n16 524132 515247\n32 565772 554726\n64 587716 497508\n128 581297 415097\n256 550296 334923\n512 486207 243679\n768 449673 192959\n1024 410836 157734\n2048 326224 82904\n4096 250252 32007\n\nNot perfect with the patch, but not awful either.\n\n\nI suspect this issue might actually explain quite a few odd performance\nbehaviours we've seen at the larger end in the past. I think it has gotten a\nbit worse with the conversion of lwlock.c to proclists (I see lots of\nexpensive multiplications to deal with sizeof(PGPROC)), but otherwise likely\nexists at least as far back as ab5194e6f61, in 9.5.\n\nI guess there's an argument for considering this a bug that we should\nbackpatch a fix for? But given the vintage, probably not? The only thing that\ngives me pause is that this is quite hard to pinpoint as happening.\n\n\nI've attached my quick-and-dirty patch. Obviously it'd need a few defines etc,\nbut I wanted to get this out to discuss before spending further time.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 27 Oct 2022 09:59:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "At Thu, 27 Oct 2022 09:59:14 -0700, Andres Freund <andres@anarazel.de> wrote in \n> But I think we can solve that fairly reasonably nonetheless. We can change\n> PGPROC->lwWaiting to not just be a boolean, but have three states:\n> 0: not waiting\n> 1: waiting in waitlist\n> 2: waiting to be woken up\n> \n> which we then can use in LWLockDequeueSelf() to only remove ourselves from the\n> list if we're on it. As removal from that list is protected by the wait list\n> lock, there's no race to worry about.\n\nSince LWLockDequeueSelf() is defined to be called in some restricted\nsituation, there's no chance that the proc to remove is in another\nlock waiters list at the time the function is called. So it seems to\nwork well. It is simple and requires no additional memory or cycles...\n\nNo. It enlarges PRPC by 8 bytes, but changing lwWaiting to int8/uint8\nkeeps the size as it is. (Rocky8/x86-64)\n\nIt just shaves off looping cycles. So +1 for what the patch does.\n\n\n> client patched HEAD\n> 1 60109 60174\n> 2 112694 116169\n> 4 214287 208119\n> 8 377459 373685\n> 16 524132 515247\n> 32 565772 554726\n> 64 587716 497508\n> 128 581297 415097\n> 256 550296 334923\n> 512 486207 243679\n> 768 449673 192959\n> 1024 410836 157734\n> 2048 326224 82904\n> 4096 250252 32007\n> \n> Not perfect with the patch, but not awful either.\n\nFairly good? Agreed. The performance peak is improved by 6% and\nshifted to larger number of clients (32->128).\n\n> I suspect this issue might actually explain quite a few odd performance\n> behaviours we've seen at the larger end in the past. I think it has gotten a\n> bit worse with the conversion of lwlock.c to proclists (I see lots of\n> expensive multiplications to deal with sizeof(PGPROC)), but otherwise likely\n> exists at least as far back as ab5194e6f61, in 9.5.\n>\n> I guess there's an argument for considering this a bug that we should\n> backpatch a fix for? But given the vintage, probably not? 
The only thing that\n> gives me pause is that this is quite hard to pinpoint as happening.\n\nI don't think this is a bug but I think it might be back-patchable\nsince it doesn't change memory footprint (if adjusted), causes no\nadditional cost or interfarce breakage, thus it might be ok to\nbackpatch. Since this \"bug\" has the nature of positive-feedback so\nreducing the coefficient is benetifical than the direct cause of the\nchange.\n\n> I've attached my quick-and-dirty patch. Obviously it'd need a few defines etc,\n> but I wanted to get this out to discuss before spending further time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 31 Oct 2022 14:32:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 11:03 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 27 Oct 2022 09:59:14 -0700, Andres Freund <andres@anarazel.de> wrote in\n> > But I think we can solve that fairly reasonably nonetheless. We can change\n> > PGPROC->lwWaiting to not just be a boolean, but have three states:\n> > 0: not waiting\n> > 1: waiting in waitlist\n> > 2: waiting to be woken up\n> >\n> > which we then can use in LWLockDequeueSelf() to only remove ourselves from the\n> > list if we're on it. As removal from that list is protected by the wait list\n> > lock, there's no race to worry about.\n\nThis looks like a good idea.\n\n\n> No. It enlarges PRPC by 8 bytes, but changing lwWaiting to int8/uint8\n> keeps the size as it is. (Rocky8/x86-64)\n\nI agree\n\n> It just shaves off looping cycles. So +1 for what the patch does.\n>\n>\n> > client patched HEAD\n> > 1 60109 60174\n> > 2 112694 116169\n> > 4 214287 208119\n> > 8 377459 373685\n> > 16 524132 515247\n> > 32 565772 554726\n> > 64 587716 497508\n> > 128 581297 415097\n> > 256 550296 334923\n> > 512 486207 243679\n> > 768 449673 192959\n> > 1024 410836 157734\n> > 2048 326224 82904\n> > 4096 250252 32007\n> >\n> > Not perfect with the patch, but not awful either.\n>\n> Fairly good? Agreed. The performance peak is improved by 6% and\n> shifted to larger number of clients (32->128).\n>\n\nThe performance result is promising so +1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 14:11:47 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 10:29 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> But I think we can solve that fairly reasonably nonetheless. We can change\n> PGPROC->lwWaiting to not just be a boolean, but have three states:\n> 0: not waiting\n> 1: waiting in waitlist\n> 2: waiting to be woken up\n>\n> which we then can use in LWLockDequeueSelf() to only remove ourselves from the\n> list if we're on it. As removal from that list is protected by the wait list\n> lock, there's no race to worry about.\n>\n> client patched HEAD\n> 1 60109 60174\n> 2 112694 116169\n> 4 214287 208119\n> 8 377459 373685\n> 16 524132 515247\n> 32 565772 554726\n> 64 587716 497508\n> 128 581297 415097\n> 256 550296 334923\n> 512 486207 243679\n> 768 449673 192959\n> 1024 410836 157734\n> 2048 326224 82904\n> 4096 250252 32007\n>\n> Not perfect with the patch, but not awful either.\n\nHere are results from my testing [1]. Results look impressive with the\npatch at a higher number of clients, for instance, on HEAD TPS with\n1024 clients is 103587 whereas it is 248702 with the patch.\n\nHEAD, run 1:\n1 34534\n2 72088\n4 135249\n8 213045\n16 243507\n32 304108\n64 375148\n128 390658\n256 345503\n512 284510\n768 146417\n1024 103587\n2048 34702\n4096 12450\n\nHEAD, run 2:\n1 34110\n2 72403\n4 134421\n8 211263\n16 241606\n32 295198\n64 353580\n128 385147\n256 341672\n512 295001\n768 142341\n1024 97721\n2048 30229\n4096 13179\n\nPATCHED, run 1:\n1 34412\n2 71733\n4 139141\n8 211526\n16 241692\n32 308198\n64 406198\n128 385643\n256 338464\n512 295559\n768 272639\n1024 248702\n2048 191402\n4096 112074\n\nPATCHED, run 2:\n1 34087\n2 73567\n4 135624\n8 211901\n16 242819\n32 310534\n64 352663\n128 381780\n256 342483\n512 301968\n768 272596\n1024 251014\n2048 184939\n4096 108186\n\n> I've attached my quick-and-dirty patch. 
Obviously it'd need a few defines etc,\n> but I wanted to get this out to discuss before spending further time.\n\nJust for the record, here are some review comments posted in the other\nthread - https://www.postgresql.org/message-id/CALj2ACXktNbG%3DK8Xi7PSqbofTZozavhaxjatVc14iYaLu4Maag%40mail.gmail.com..\n\nBTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\nwith the same set up [1], I'm not sure if it's really because of the\npatch. I'm unable to reproduce it now and unfortunately I didn't\ncapture further details when it occurred.\n\n[1] ./configure --prefix=$PWD/inst/ --enable-tap-tests CFLAGS=\"-O3\" >\ninstall.log && make -j 8 install > install.log 2>&1 &\nshared_buffers = 8GB\nmax_wal_size = 32GB\nmax_connections = 4096\ncheckpoint_timeout = 10min\n\nubuntu: cat << EOF >> txid.sql\nSELECT txid_current();\nEOF\nubuntu: for c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do\necho -n \"$c \";./pgbench -n -M prepared -U ubuntu postgres -f txid.sql\n-c$c -j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:21:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "I was working on optimizing the LWLock queue in a little different way\nand I also did a benchmarking of Andres' original patch from this\nthread. [1]\nThe results are quite impressive, indeed. Please feel free to see the\nresults and join the discussion in [1] if you want.\n\nBest regards,\nPavel\n\n[1] https://www.postgresql.org/message-id/flat/CALT9ZEEz%2B%3DNepc5eti6x531q64Z6%2BDxtP3h-h_8O5HDdtkJcPw%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 15:09:35 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi Andres,\n\nThank you for your patch. The results are impressive.\n\nOn Mon, Oct 31, 2022 at 2:10 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> I was working on optimizing the LWLock queue in a little different way\n> and I also did a benchmarking of Andres' original patch from this\n> thread. [1]\n> The results are quite impressive, indeed. Please feel free to see the\n> results and join the discussion in [1] if you want.\n>\n> Best regards,\n> Pavel\n>\n> [1] https://www.postgresql.org/message-id/flat/CALT9ZEEz%2B%3DNepc5eti6x531q64Z6%2BDxtP3h-h_8O5HDdtkJcPw%40mail.gmail.com\n\nPavel posted a patch implementing a lock-less queue for LWLock. The\nresults are interesting indeed, but slightly lower than your current\npatch have. The current Pavel's patch probably doesn't utilize the\nfull potential of lock-less idea. I wonder what do you think about\nthis direction? We would be grateful for your guidance. Thank you.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 31 Oct 2022 15:38:46 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Thu, Oct 27, 2022 at 12:59 PM Andres Freund <andres@anarazel.de> wrote:\n> After pulling my\n> hair out for quite a while to try to understand that behaviour, I figured out\n> that it's just a side-effect of *removing* some other contention.\n\nI've seen this kind of pattern on multiple occasions. I don't know if\nthey were all caused by this, or what, but I certainly like the idea\nof making it better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 14:40:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> with the same set up [1], I'm not sure if it's really because of the\n> patch. I'm unable to reproduce it now and unfortunately I didn't\n> capture further details when it occurred.\n\nThat's likely because the prototype patch I submitted in this thread missed\nupdating LWLockUpdateVar().\n\nUpdated patch attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 31 Oct 2022 16:51:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 4:51 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> > BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> > with the same set up [1], I'm not sure if it's really because of the\n> > patch. I'm unable to reproduce it now and unfortunately I didn't\n> > capture further details when it occurred.\n>\n> That's likely because the prototype patch I submitted in this thread missed\n> updating LWLockUpdateVar().\n>\n> Updated patch attached.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi,\nMinor comment:\n\n+ uint8 lwWaiting; /* see LWLockWaitState */\n\nWhy not declare `lwWaiting` of type LWLockWaitState ?\n\nCheers\n\nOn Mon, Oct 31, 2022 at 4:51 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> with the same set up [1], I'm not sure if it's really because of the\n> patch. I'm unable to reproduce it now and unfortunately I didn't\n> capture further details when it occurred.\n\nThat's likely because the prototype patch I submitted in this thread missed\nupdating LWLockUpdateVar().\n\nUpdated patch attached.\n\nGreetings,\n\nAndres FreundHi,Minor comment:+ uint8 lwWaiting; /* see LWLockWaitState */Why not declare `lwWaiting` of type LWLockWaitState ?Cheers",
"msg_date": "Mon, 31 Oct 2022 17:17:03 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 17:17:03 -0700, Zhihong Yu wrote:\n> On Mon, Oct 31, 2022 at 4:51 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > On 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> > > BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> > > with the same set up [1], I'm not sure if it's really because of the\n> > > patch. I'm unable to reproduce it now and unfortunately I didn't\n> > > capture further details when it occurred.\n> >\n> > That's likely because the prototype patch I submitted in this thread missed\n> > updating LWLockUpdateVar().\n> >\n> > Updated patch attached.\n> >\n> > Greetings,\n> >\n> > Andres Freund\n> >\n> \n> Hi,\n> Minor comment:\n> \n> + uint8 lwWaiting; /* see LWLockWaitState */\n> \n> Why not declare `lwWaiting` of type LWLockWaitState ?\n\nUnfortunately C99 (*) doesn't allow to specify the width of an enum\nfield. With most compilers we'd end up using 4 bytes.\n\nGreetings,\n\nAndres Freund\n\n(*) C++ has allowed specifying this for quite a few years now and I think C23\nwill support it too, but that doesn't help us at this point.\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:19:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 5:19 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-10-31 17:17:03 -0700, Zhihong Yu wrote:\n> > On Mon, Oct 31, 2022 at 4:51 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> >\n> > > Hi,\n> > >\n> > > On 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> > > > BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> > > > with the same set up [1], I'm not sure if it's really because of the\n> > > > patch. I'm unable to reproduce it now and unfortunately I didn't\n> > > > capture further details when it occurred.\n> > >\n> > > That's likely because the prototype patch I submitted in this thread\n> missed\n> > > updating LWLockUpdateVar().\n> > >\n> > > Updated patch attached.\n> > >\n> > > Greetings,\n> > >\n> > > Andres Freund\n> > >\n> >\n> > Hi,\n> > Minor comment:\n> >\n> > + uint8 lwWaiting; /* see LWLockWaitState */\n> >\n> > Why not declare `lwWaiting` of type LWLockWaitState ?\n>\n> Unfortunately C99 (*) doesn't allow to specify the width of an enum\n> field. With most compilers we'd end up using 4 bytes.\n>\n> Greetings,\n>\n> Andres Freund\n>\n> (*) C++ has allowed specifying this for quite a few years now and I think\n> C23\n> will support it too, but that doesn't help us at this point.\n>\n\nHi,\nThanks for the response.\n\nIf possible, it would be better to put your explanation in the code comment\n(so that other people know the reasoning).\n\nOn Mon, Oct 31, 2022 at 5:19 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-10-31 17:17:03 -0700, Zhihong Yu wrote:\n> On Mon, Oct 31, 2022 at 4:51 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > On 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> > > BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> > > with the same set up [1], I'm not sure if it's really because of the\n> > > patch. 
I'm unable to reproduce it now and unfortunately I didn't\n> > > capture further details when it occurred.\n> >\n> > That's likely because the prototype patch I submitted in this thread missed\n> > updating LWLockUpdateVar().\n> >\n> > Updated patch attached.\n> >\n> > Greetings,\n> >\n> > Andres Freund\n> >\n> \n> Hi,\n> Minor comment:\n> \n> + uint8 lwWaiting; /* see LWLockWaitState */\n> \n> Why not declare `lwWaiting` of type LWLockWaitState ?\n\nUnfortunately C99 (*) doesn't allow to specify the width of an enum\nfield. With most compilers we'd end up using 4 bytes.\n\nGreetings,\n\nAndres Freund\n\n(*) C++ has allowed specifying this for quite a few years now and I think C23\nwill support it too, but that doesn't help us at this point.Hi,Thanks for the response.If possible, it would be better to put your explanation in the code comment (so that other people know the reasoning).",
"msg_date": "Mon, 31 Oct 2022 18:00:16 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 5:21 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-10-31 16:21:06 +0530, Bharath Rupireddy wrote:\n> > BTW, I've seen a sporadic crash (SEGV) with the patch in bg writer\n> > with the same set up [1], I'm not sure if it's really because of the\n> > patch. I'm unable to reproduce it now and unfortunately I didn't\n> > capture further details when it occurred.\n>\n> That's likely because the prototype patch I submitted in this thread missed\n> updating LWLockUpdateVar().\n>\n> Updated patch attached.\n\nThanks. It looks good to me. However, some minor comments on the v3 patch:\n\n1.\n- if (MyProc->lwWaiting)\n+ if (MyProc->lwWaiting != LW_WS_NOT_WAITING)\n elog(PANIC, \"queueing for lock while waiting on another one\");\n\nCan the above condition be MyProc->lwWaiting == LW_WS_WAITING ||\nMyProc->lwWaiting == LW_WS_PENDING_WAKEUP for better readability?\n\nOr add an assertion Assert(MyProc->lwWaiting != LW_WS_WAITING &&\nMyProc->lwWaiting != LW_WS_PENDING_WAKEUP); before setting\nLW_WS_WAITING?\n\n2.\n /* Awaken any waiters I removed from the queue. */\n proclist_foreach_modify(iter, &wakeup, lwWaitLink)\n {\n\n@@ -1044,7 +1052,7 @@ LWLockWakeup(LWLock *lock)\n * another lock.\n */\n pg_write_barrier();\n- waiter->lwWaiting = false;\n+ waiter->lwWaiting = LW_WS_NOT_WAITING;\n PGSemaphoreUnlock(waiter->sem);\n }\n\n /*\n * Awaken any waiters I removed from the queue.\n */\n proclist_foreach_modify(iter, &wakeup, lwWaitLink)\n {\n PGPROC *waiter = GetPGProcByNumber(iter.cur);\n\n proclist_delete(&wakeup, iter.cur, lwWaitLink);\n /* check comment in LWLockWakeup() about this barrier */\n pg_write_barrier();\n waiter->lwWaiting = LW_WS_NOT_WAITING;\n\nCan we add an assertion Assert(waiter->lwWaiting ==\nLW_WS_PENDING_WAKEUP) in the above two places? We prepare the wakeup\nlist and set the LW_WS_NOT_WAITING flag in above loops, having an\nassertion is better here IMO.\n\nBelow are test results with v3 patch. 
+1 for back-patching it.\n\n HEAD PATCHED\n1 34142 34289\n2 72760 69720\n4 136300 131848\n8 210809 210192\n16 240718 242744\n32 297587 297354\n64 341939 343036\n128 383615 383801\n256 342094 337680\n512 263194 288629\n768 145526 261553\n1024 107267 241811\n2048 35716 188389\n4096 12415 120300\n\n PG15 PATCHED\n1 34503 34078\n2 73708 72054\n4 139415 133321\n8 212396 211390\n16 242227 242584\n32 303441 309288\n64 362680 339211\n128 378645 344291\n256 340016 344291\n512 290044 293337\n768 140277 264618\n1024 96191 247636\n2048 35158 181488\n4096 12164 118610\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 1 Nov 2022 12:46:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 3:17 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Below are test results with v3 patch. +1 for back-patching it.\n\nThe problem with back-patching stuff like this is that it can have\nunanticipated consequences. I think that the chances of something like\nthis backfiring are less than for a patch that changes plans, but I\ndon't think that they're nil, either. It could turn out that this\npatch, which has really promising results on the workloads we've\ntested, harms some other workload due to some other contention pattern\nwe can't foresee. It could also turn out that improving performance at\nthe database level actually has negative consequences for some\napplication using the database, because the application could be\nunknowingly relying on the database to throttle its activity.\n\nIt's hard for me to estimate exactly what the risk of a patch like\nthis is. I think that if we back-patched this, and only this, perhaps\nthe chances of something bad happening aren't incredibly high. But if\nwe get into the habit of back-patching seemingly-innocuous performance\nimprovements, it's only a matter of time before one of them turns out\nnot to be so innocuous as we thought. I would guess that the number of\ntimes we have to back-patch something like this before somebody starts\ncomplaining about a regression is likely to be somewhere between 3 and\n5.\n\nIt's possible that I'm too pessimistic, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:37:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On 11/1/22 8:37 AM, Robert Haas wrote:\r\n> On Tue, Nov 1, 2022 at 3:17 AM Bharath Rupireddy\r\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n>> Below are test results with v3 patch. +1 for back-patching it.\r\n\r\nFirst, awesome find and proposed solution!\r\n\r\n> The problem with back-patching stuff like this is that it can have\r\n> unanticipated consequences. I think that the chances of something like\r\n> this backfiring are less than for a patch that changes plans, but I\r\n> don't think that they're nil, either. It could turn out that this\r\n> patch, which has really promising results on the workloads we've\r\n> tested, harms some other workload due to some other contention pattern\r\n> we can't foresee. It could also turn out that improving performance at\r\n> the database level actually has negative consequences for some\r\n> application using the database, because the application could be\r\n> unknowingly relying on the database to throttle its activity.\r\n\r\nIf someone is using the database to throttle activity for their app, I \r\nhave a bunch of follow up questions to understand why.\r\n\r\n> It's hard for me to estimate exactly what the risk of a patch like\r\n> this is. I think that if we back-patched this, and only this, perhaps\r\n> the chances of something bad happening aren't incredibly high. But if\r\n> we get into the habit of back-patching seemingly-innocuous performance\r\n> improvements, it's only a matter of time before one of them turns out\r\n> not to be so innocuous as we thought. I would guess that the number of\r\n> times we have to back-patch something like this before somebody starts\r\n> complaining about a regression is likely to be somewhere between 3 and\r\n> 5.\r\n\r\nHaving the privilege of reading through the release notes for every \r\nupdate release, on average 1-2 \"performance improvements\" in each \r\nrelease. 
I believe they tend to be more negligible, though.\r\n\r\nI do understand the concerns. Say you discover your workload does have a \r\nregression with this patch and then there's a CVE that you want to \r\naccept -- what do you do? Reading the thread / patch, it seems as if \r\nthis is a lower risk \"performance fix\", but still nonzero.\r\n\r\nWhile this does affect all supported versions, we could also consider \r\nbackpatching only for PG15. That at least 1/ limits impact on users \r\nrunning older versions (opting into a major version upgrade) and 2/ \r\nwe're still very early in the major upgrade cycle for PG15 that it's \r\nlower risk if there are issues.\r\n\r\nUsers are generally happy when they can perform a simple upgrade and get \r\na performance boost, particularly the set of users that this patch \r\naffects most (high throughput, high connection count). This is the type \r\nof fix that would make headlines in a major release announcement (10x \r\nTPS improvement w/4096 connections?!). That is also part of the tradeoff \r\nof backpatching this, is that we may lose some of the higher visibility \r\nmarketing opportunities to discuss this (though I'm sure there will be \r\nplenty of blog posts, etc.)\r\n\r\nAndres: when you suggested backpatching, were you thinking of the Nov \r\n2022 release or the Feb 2023 release?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 1 Nov 2022 11:19:02 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-01 08:37:39 -0400, Robert Haas wrote:\n> On Tue, Nov 1, 2022 at 3:17 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Below are test results with v3 patch. +1 for back-patching it.\n> \n> The problem with back-patching stuff like this is that it can have\n> unanticipated consequences. I think that the chances of something like\n> this backfiring are less than for a patch that changes plans, but I\n> don't think that they're nil, either. It could turn out that this\n> patch, which has really promising results on the workloads we've\n> tested, harms some other workload due to some other contention pattern\n> we can't foresee. It could also turn out that improving performance at\n> the database level actually has negative consequences for some\n> application using the database, because the application could be\n> unknowingly relying on the database to throttle its activity.\n> \n> It's hard for me to estimate exactly what the risk of a patch like\n> this is. I think that if we back-patched this, and only this, perhaps\n> the chances of something bad happening aren't incredibly high. But if\n> we get into the habit of back-patching seemingly-innocuous performance\n> improvements, it's only a matter of time before one of them turns out\n> not to be so innocuous as we thought. I would guess that the number of\n> times we have to back-patch something like this before somebody starts\n> complaining about a regression is likely to be somewhere between 3 and\n> 5.\n\nIn general I agree, we shouldn't default to backpatching performance\nfixes. The reason I am even considering it in this case, is that it's a\nreadily reproducible issue, leading to a quadratic behaviour that's extremely\nhard to pinpoint. There's no increase in CPU usage, no wait event for\nspinlocks, the system doesn't even get stuck (because the wait list lock is\nheld after the lwlock lock release). 
I don't think users have a decent chance\nat figuring out that this is the issue.\n\nI'm not at all convinced we should backpatch either, just to be clear.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:59:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-01 11:19:02 -0400, Jonathan S. Katz wrote:\n> This is the type of fix that would make headlines in a major release\n> announcement (10x TPS improvement w/4096 connections?!). That is also part\n> of the tradeoff of backpatching this, is that we may lose some of the higher\n> visibility marketing opportunities to discuss this (though I'm sure there\n> will be plenty of blog posts, etc.)\n\n(read the next paragraph with the caveat that results below prove it somewhat\nwrong)\n\nI don't think the fix is as big a deal as the above make it sound - you need\nto do somewhat extreme things to hit the problem. Yes, it drastically improves\nthe scalability of e.g. doing SELECT txid_current() across as many sessions as\npossible - but that's not something you normally do (it was a good candidate\nto show the problem because it's a single lock but doesn't trigger WAL flushes\nat commit).\n\nYou can probably hit the problem with many concurrent single-tx INSERTs, but\nyou'd need to have synchronous_commit=off or fsync=off (or a very expensive\nserver class SSD with battery backup) and the effect is likely smaller.\n\n\n> Andres: when you suggested backpatching, were you thinking of the Nov 2022\n> release or the Feb 2023 release?\n\nI wasn't thinking that concretely. Even if we decide to backpatch, I'd be very\nhesitant to do it in a few days.\n\n\n<goes and runs test while in meeting>\n\n\nI tested with browser etc running, so this is plenty noisy. I used the best of\nthe two pgbench -T21 -P5 tps, after ignoring the first two periods (they're\ntoo noisy). 
I used an ok-ish NVMe SSD, rather than the expensive one that\nhas \"free\" fsync.\n\nsynchronous_commit=on:\n\nclients master\t fix\n16 6196 6202\n64 25716 25545\n256 90131 90240\n1024 128556 151487\n2048 59417 157050\n4096 32252 178823\n\n\nsynchronous_commit=off:\n\nclients master\t fix\n16 409828\t 409016\n64 454257 455804\n256 304175 452160\n1024 135081 334979\n2048 66124 291582\n4096 27019 245701\n\n\nHm. That's a bigger effect than I anticipated. I guess sc=off isn't actually\nrequired, due to the level of concurrency making group commit very\neffective.\n\nThis is without an index, serial column or anything. But a quick comparison\nfor just 4096 clients shows that to still be a big difference if I create a\nserial primary key:\nmaster: 26172\nfix: 155813\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 10:41:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On 11/1/22 1:41 PM, Andres Freund wrote:\r\n\r\n>> Andres: when you suggested backpatching, were you thinking of the Nov 2022\r\n>> release or the Feb 2023 release?\r\n> \r\n> I wasn't thinking that concretely. Even if we decide to backpatch, I'd be very\r\n> hesitant to do it in a few days.\r\n\r\nYeah this was my thinking (and also why I took a few days to reply given \r\nthe lack of urgency for this release). It would at least give some more \r\ntime for others to test it to feel confident that we're not introducing \r\nnoticeable regressions.\r\n\r\n> <goes and runs test while in meeting>\r\n> \r\n> \r\n> I tested with browser etc running, so this is plenty noisy. I used the best of\r\n> the two pgbench -T21 -P5 tps, after ignoring the first two periods (they're\r\n> too noisy). I used an ok-ish NVMe SSD, rather than the the expensive one that\r\n> has \"free\" fsync.\r\n> \r\n> synchronous_commit=on:\r\n> \r\n> clients master\t fix\r\n> 16 6196 6202\r\n> 64 25716 25545\r\n> 256 90131 90240\r\n> 1024 128556 151487\r\n> 2048 59417 157050\r\n> 4096 32252 178823\r\n> \r\n> \r\n> synchronous_commit=off:\r\n> \r\n> clients master\t fix\r\n> 16 409828\t 409016\r\n> 64 454257 455804\r\n> 256 304175 452160\r\n> 1024 135081 334979\r\n> 2048 66124 291582\r\n> 4096 27019 245701\r\n> \r\n> \r\n> Hm. That's a bigger effect than I anticipated. I guess sc=off isn't actually\r\n> required, due to the level of concurrency making group commit very\r\n> effective.\r\n> \r\n> This is without an index, serial column or anything. 
But a quick comparison\r\n> for just 4096 clients shows that to still be a big difference if I create an\r\n> serial primary key:\r\n> master: 26172\r\n> fix: 155813\r\n\r\n🤯 (seeing if my exploding head makes it into the archives).\r\n\r\nGiven the lack of ABI changes (hesitant to say low-risk until after more \r\ntesting, but seemingly low-risk), I can get behind backpatching esp if \r\nwe're targeting Feb 2023 so we can tests some more.\r\n\r\nWith my advocacy hat on, it bums me that we may not get as much buzz \r\nabout this change given it's not in a major release, but 1/ it'll fix an \r\nissue that will help users with high-concurrency and 2/ users would be \r\nable to perform a simpler update to get the change.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 3 Nov 2022 14:21:18 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 12:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > Updated patch attached.\n>\n> Thanks. It looks good to me. However, some minor comments on the v3 patch:\n>\n> 1.\n> - if (MyProc->lwWaiting)\n> + if (MyProc->lwWaiting != LW_WS_NOT_WAITING)\n> elog(PANIC, \"queueing for lock while waiting on another one\");\n>\n> Can the above condition be MyProc->lwWaiting == LW_WS_WAITING ||\n> MyProc->lwWaiting == LW_WS_PENDING_WAKEUP for better readability?\n>\n> Or add an assertion Assert(MyProc->lwWaiting != LW_WS_WAITING &&\n> MyProc->lwWaiting != LW_WS_PENDING_WAKEUP); before setting\n> LW_WS_WAITING?\n>\n> 2.\n> /* Awaken any waiters I removed from the queue. */\n> proclist_foreach_modify(iter, &wakeup, lwWaitLink)\n> {\n>\n> @@ -1044,7 +1052,7 @@ LWLockWakeup(LWLock *lock)\n> * another lock.\n> */\n> pg_write_barrier();\n> - waiter->lwWaiting = false;\n> + waiter->lwWaiting = LW_WS_NOT_WAITING;\n> PGSemaphoreUnlock(waiter->sem);\n> }\n>\n> /*\n> * Awaken any waiters I removed from the queue.\n> */\n> proclist_foreach_modify(iter, &wakeup, lwWaitLink)\n> {\n> PGPROC *waiter = GetPGProcByNumber(iter.cur);\n>\n> proclist_delete(&wakeup, iter.cur, lwWaitLink);\n> /* check comment in LWLockWakeup() about this barrier */\n> pg_write_barrier();\n> waiter->lwWaiting = LW_WS_NOT_WAITING;\n>\n> Can we add an assertion Assert(waiter->lwWaiting ==\n> LW_WS_PENDING_WAKEUP) in the above two places? We prepare the wakeup\n> list and set the LW_WS_NOT_WAITING flag in above loops, having an\n> assertion is better here IMO.\n>\n> Below are test results with v3 patch. 
+1 for back-patching it.\n>\n> HEAD PATCHED\n> 1 34142 34289\n> 2 72760 69720\n> 4 136300 131848\n> 8 210809 210192\n> 16 240718 242744\n> 32 297587 297354\n> 64 341939 343036\n> 128 383615 383801\n> 256 342094 337680\n> 512 263194 288629\n> 768 145526 261553\n> 1024 107267 241811\n> 2048 35716 188389\n> 4096 12415 120300\n>\n> PG15 PATCHED\n> 1 34503 34078\n> 2 73708 72054\n> 4 139415 133321\n> 8 212396 211390\n> 16 242227 242584\n> 32 303441 309288\n> 64 362680 339211\n> 128 378645 344291\n> 256 340016 344291\n> 512 290044 293337\n> 768 140277 264618\n> 1024 96191 247636\n> 2048 35158 181488\n> 4096 12164 118610\n\nI looked at the v3 patch again today and ran some performance tests.\nThe results look impressive as they were earlier. Andres, any plans to\nget this in?\n\npgbench with SELECT txid_current();:\nClients HEAD PATCHED\n1 34613 33611\n2 72634 70546\n4 137885 136911\n8 216470 216076\n16 242535 245392\n32 299952 304740\n64 329788 347401\n128 378296 386873\n256 344939 343832\n512 292196 295839\n768 144212 260102\n1024 101525 250263\n2048 35594 185878\n4096 11842 104227\n\npgbench with insert into pgbench_accounts table:\nClients HEAD PATCHED\n1 1660 1600\n2 1848 1746\n4 3547 3395\n8 7330 6754\n16 13103 13613\n32 26011 26372\n64 52331 52594\n128 93313 95526\n256 127373 126182\n512 126712 127857\n768 116765 119227\n1024 111464 112499\n2048 58838 92756\n4096 26066 60543\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 9 Nov 2022 15:54:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-09 15:54:16 +0530, Bharath Rupireddy wrote:\n> On Tue, Nov 1, 2022 at 12:46 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > Updated patch attached.\n> >\n> > Thanks. It looks good to me. However, some minor comments on the v3 patch:\n> >\n> > 1.\n> > - if (MyProc->lwWaiting)\n> > + if (MyProc->lwWaiting != LW_WS_NOT_WAITING)\n> > elog(PANIC, \"queueing for lock while waiting on another one\");\n> >\n> > Can the above condition be MyProc->lwWaiting == LW_WS_WAITING ||\n> > MyProc->lwWaiting == LW_WS_PENDING_WAKEUP for better readability?\n> >\n> > Or add an assertion Assert(MyProc->lwWaiting != LW_WS_WAITING &&\n> > MyProc->lwWaiting != LW_WS_PENDING_WAKEUP); before setting\n> > LW_WS_WAITING?\n\nI don't think that's a good idea - it'll just mean we have to modify more\nplaces if we add another state, without making anything more robust.\n\n\n> > 2.\n> > /* Awaken any waiters I removed from the queue. */\n> > proclist_foreach_modify(iter, &wakeup, lwWaitLink)\n> > {\n> >\n> > @@ -1044,7 +1052,7 @@ LWLockWakeup(LWLock *lock)\n> > * another lock.\n> > */\n> > pg_write_barrier();\n> > - waiter->lwWaiting = false;\n> > + waiter->lwWaiting = LW_WS_NOT_WAITING;\n> > PGSemaphoreUnlock(waiter->sem);\n> > }\n> >\n> > /*\n> > * Awaken any waiters I removed from the queue.\n> > */\n> > proclist_foreach_modify(iter, &wakeup, lwWaitLink)\n> > {\n> > PGPROC *waiter = GetPGProcByNumber(iter.cur);\n> >\n> > proclist_delete(&wakeup, iter.cur, lwWaitLink);\n> > /* check comment in LWLockWakeup() about this barrier */\n> > pg_write_barrier();\n> > waiter->lwWaiting = LW_WS_NOT_WAITING;\n> >\n> > Can we add an assertion Assert(waiter->lwWaiting ==\n> > LW_WS_PENDING_WAKEUP) in the above two places? 
We prepare the wakeup\n> > list and set the LW_WS_NOT_WAITING flag in above loops, having an\n> > assertion is better here IMO.\n\nI guess it can't hurt - but it's not really related to the changes in the\npatch, no?\n\n\n> I looked at the v3 patch again today and ran some performance tests.\n> The results look impressive as they were earlier. Andres, any plans to\n> get this in?\n\nI definitely didn't want to backpatch before this point release. But it seems\nwe haven't quite got to an agreement what to do about backpatching. It's\nprobably best to just commit it to HEAD and let the backpatch discussion\nhappen concurrently.\n\nI'm on a hike, without any connectivity, Thu afternoon - Sun. I think it's OK\nto push it to HEAD if I get it done in the next few hours. Bigger issues,\nwhich I do not expect, should show up before tomorrow afternoon. Smaller\nthings could wait till Sunday if necessary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Nov 2022 09:38:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On 2022-11-09 09:38:08 -0800, Andres Freund wrote:\n> I'm on a hike, without any connectivity, Thu afternoon - Sun. I think it's OK\n> to push it to HEAD if I get it done in the next few hours. Bigger issues,\n> which I do not expect, should show up before tomorrow afternoon. Smaller\n> things could wait till Sunday if necessary.\n\nI didn't get to it in time, so I'll leave it for when I'm back.\n\n\n",
"msg_date": "Wed, 9 Nov 2022 17:03:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-09 17:03:13 -0800, Andres Freund wrote:\n> On 2022-11-09 09:38:08 -0800, Andres Freund wrote:\n> > I'm on a hike, without any connectivity, Thu afternoon - Sun. I think it's OK\n> > to push it to HEAD if I get it done in the next few hours. Bigger issues,\n> > which I do not expect, should show up before tomorrow afternoon. Smaller\n> > things could wait till Sunday if necessary.\n> \n> I didn't get to it in time, so I'll leave it for when I'm back.\n\nTook a few days longer, partially because I encountered an independent issue\n(see 8c954168cff) while testing.\n\nI pushed it to HEAD now.\n\nI still think it might be worth to backpatch in a bit, but so far the votes on\nthat weren't clear enough on that to feel comfortable.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Sun, 20 Nov 2022 11:56:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On 11/20/22 2:56 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-11-09 17:03:13 -0800, Andres Freund wrote:\r\n>> On 2022-11-09 09:38:08 -0800, Andres Freund wrote:\r\n>>> I'm on a hike, without any connectivity, Thu afternoon - Sun. I think it's OK\r\n>>> to push it to HEAD if I get it done in the next few hours. Bigger issues,\r\n>>> which I do not expect, should show up before tomorrow afternoon. Smaller\r\n>>> things could wait till Sunday if necessary.\r\n>>\r\n>> I didn't get to it in time, so I'll leave it for when I'm back.\r\n> \r\n> Took a few days longer, partially because I encountered an independent issue\r\n> (see 8c954168cff) while testing.\r\n> \r\n> I pushed it to HEAD now.\r\n\r\nThanks!\r\n\r\n> I still think it might be worth to backpatch in a bit, but so far the votes on\r\n> that weren't clear enough on that to feel comfortable.\r\n\r\nMy general feeling is \"yes\" on backpatching, particularly if this is a \r\nbug and it's fixable without ABI breaks.\r\n\r\nMy comments were around performing additional workload benchmarking just \r\nto ensure people feel comfortable that we're not introducing any \r\nperformance regressions, and to consider the Feb 2023 release as the \r\ntime to introduce this (vs. Nov 2022). That gives us ample time to \r\ndetermine if there are any performance regressions introduced.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 21 Nov 2022 10:31:14 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 10:31:14AM -0500, Jonathan S. Katz wrote:\n> On 11/20/22 2:56 PM, Andres Freund wrote:\n>> I still think it might be worth to backpatch in a bit, but so far the votes on\n>> that weren't clear enough on that to feel comfortable.\n> \n> My general feeling is \"yes\" on backpatching, particularly if this is a bug\n> and it's fixable without ABI breaks.\n\nNow that commit a4adc31 has had some time to bake and concerns about\nunintended consequences may have abated, I wanted to revive this\nback-patching discussion. I see a few possibly-related reports [0] [1]\n[2], and I'm now seeing this in the field, too. While it is debatable\nwhether this is a bug, it's a quite nasty issue for users, and it's both\ndifficult to detect and difficult to work around.\n\nThoughts?\n\n[0] https://postgr.es/m/CAM527d-uDn5osa6QPKxHAC6srOfBH3M8iXUM%3DewqHV6n%3Dw1u8Q%40mail.gmail.com\n[1] https://postgr.es/m/VI1PR05MB620666631A41186ACC3FC91ACFC70%40VI1PR05MB6206.eurprd05.prod.outlook.com\n[2] https://postgr.es/m/dd0e070809430a31f7ddd8483fbcce59%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Jan 2024 21:17:47 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Wed, Jan 10, 2024 at 09:17:47PM -0600, Nathan Bossart wrote:\n> Now that commit a4adc31 has had some time to bake and concerns about\n> unintended consequences may have abated, I wanted to revive this\n> back-patching discussion. I see a few possibly-related reports [0] [1]\n> [2], and I'm now seeing this in the field, too. While it is debatable\n> whether this is a bug, it's a quite nasty issue for users, and it's both\n> difficult to detect and difficult to work around.\n\n+1, I've seen this becoming a PITA for a few things. Knowing that the\nsize of PGPROC does not change at all, I would be in favor for a\nbackpatch, especially since it's been in the tree for more than 1\nyear, and even more knowing that we have 16 released with this stuff\nin.\n--\nMichael",
"msg_date": "Thu, 11 Jan 2024 12:45:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On 1/10/24 10:45 PM, Michael Paquier wrote:\r\n> On Wed, Jan 10, 2024 at 09:17:47PM -0600, Nathan Bossart wrote:\r\n>> Now that commit a4adc31 has had some time to bake and concerns about\r\n>> unintended consequences may have abated, I wanted to revive this\r\n>> back-patching discussion. I see a few possibly-related reports [0] [1]\r\n>> [2], and I'm now seeing this in the field, too. While it is debatable\r\n>> whether this is a bug, it's a quite nasty issue for users, and it's both\r\n>> difficult to detect and difficult to work around.\r\n> \r\n> +1, I've seen this becoming a PITA for a few things. Knowing that the\r\n> size of PGPROC does not change at all, I would be in favor for a\r\n> backpatch, especially since it's been in the tree for more than 1\r\n> year, and even more knowing that we have 16 released with this stuff\r\n> in.\r\n\r\nI have similar data sources to Nathan/Michael and I'm trying to avoid \r\npiling on, but one case that's interesting occurred after a major \r\nversion upgrade from PG10 to PG14 on a database supporting a very \r\nactive/highly concurrent workload. On inspection, it seems like \r\nbackpatching would help this particular case.\r\n\r\nWith 10/11 EOL, I do wonder if we'll see more of these reports on \r\nupgrade to < PG16.\r\n\r\n(I was in favor of backpatching prior; opinion is unchanged).\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 11 Jan 2024 09:47:33 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 09:47:33AM -0500, Jonathan S. Katz wrote:\n> I have similar data sources to Nathan/Michael and I'm trying to avoid piling\n> on, but one case that's interesting occurred after a major version upgrade\n> from PG10 to PG14 on a database supporting a very active/highly concurrent\n> workload. On inspection, it seems like backpatching would help this\n> particularly case.\n> \n> With 10/11 EOL, I do wonder if we'll see more of these reports on upgrade to\n> < PG16.\n> \n> (I was in favor of backpatching prior; opinion is unchanged).\n\nHearing nothing, I have prepared a set of patches for v12~v15,\nchecking all the lwlock paths for all the branches. At the end the\nset of changes look rather sane to me regarding the queue handlings.\n\nI have also run some numbers on all the branches, and the test case\nposted upthread falls off dramatically after 512 concurrent\nconnections at the top of all the stable branches :(\n\nFor example on REL_12_STABLE with and without the patch attached:\nnum v12 v12+patch\n1 29717.151665 29096.707588\n2 63257.709301 61889.476318\n4 127921.873393 124575.901330\n8 231400.571662 230562.725174\n16 343911.185351 312432.897015\n32 291748.985280 281011.787701\n64 268998.728648 269975.605115\n128 297332.597018 286449.176950\n256 243902.817657 240559.122309 \n512 190069.602270 194510.718508\n768 58915.650225 165714.707198\n1024 39920.950552 149433.836901\n2048 16922.391688 108164.301054\n4096 6229.063321 69032.338708\n\nI'd like to apply that, just let me know if you have any comments\nand/or objections.\n--\nMichael",
"msg_date": "Tue, 16 Jan 2024 15:11:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On 1/16/24 1:11 AM, Michael Paquier wrote:\r\n> On Thu, Jan 11, 2024 at 09:47:33AM -0500, Jonathan S. Katz wrote:\r\n>> I have similar data sources to Nathan/Michael and I'm trying to avoid piling\r\n>> on, but one case that's interesting occurred after a major version upgrade\r\n>> from PG10 to PG14 on a database supporting a very active/highly concurrent\r\n>> workload. On inspection, it seems like backpatching would help this\r\n>> particularly case.\r\n>>\r\n>> With 10/11 EOL, I do wonder if we'll see more of these reports on upgrade to\r\n>> < PG16.\r\n>>\r\n>> (I was in favor of backpatching prior; opinion is unchanged).\r\n> \r\n> Hearing nothing, I have prepared a set of patches for v12~v15,\r\n> checking all the lwlock paths for all the branches. At the end the\r\n> set of changes look rather sane to me regarding the queue handlings.\r\n> \r\n> I have also run some numbers on all the branches, and the test case\r\n> posted upthread falls off dramatically after 512 concurrent\r\n> connections at the top of all the stable branches :(\r\n> \r\n> For example on REL_12_STABLE with and without the patch attached:\r\n> num v12 v12+patch\r\n> 1 29717.151665 29096.707588\r\n> 2 63257.709301 61889.476318\r\n> 4 127921.873393 124575.901330\r\n> 8 231400.571662 230562.725174\r\n> 16 343911.185351 312432.897015\r\n> 32 291748.985280 281011.787701\r\n> 64 268998.728648 269975.605115\r\n> 128 297332.597018 286449.176950\r\n> 256 243902.817657 240559.122309\r\n> 512 190069.602270 194510.718508\r\n> 768 58915.650225 165714.707198\r\n> 1024 39920.950552 149433.836901\r\n> 2048 16922.391688 108164.301054\r\n> 4096 6229.063321 69032.338708\r\n> \r\n> I'd like to apply that, just let me know if you have any comments\r\n> and/or objections.\r\n\r\nWow. 
All I can say is that my opinion remains unchanged on going forward \r\nwith backpatching.\r\n\r\nLooking at the code, I understand an argument for not backpatching given \r\nwe modify the struct, but this does seem low-risk/high-reward and should \r\nhelp PostgreSQL to run better on this higher throughput workloads.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 16 Jan 2024 23:24:49 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 11:24:49PM -0500, Jonathan S. Katz wrote:\n> On 1/16/24 1:11 AM, Michael Paquier wrote:\n>> I'd like to apply that, just let me know if you have any comments\n>> and/or objections.\n> \n> Looking at the code, I understand an argument for not backpatching given we\n> modify the struct, but this does seem low-risk/high-reward and should help\n> PostgreSQL to run better on this higher throughput workloads.\n\nJust to be clear here. I have repeated tests on all the stable\nbranches yesterday, and the TPS falls off drastically around 256\nconcurrent sessions for all of them with patterns similar to what I've\nposted for 12, getting back a lot of performance for the cases with\nmore than 1k connections.\n--\nMichael",
"msg_date": "Wed, 17 Jan 2024 15:19:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 03:11:48PM +0900, Michael Paquier wrote:\n> I'd like to apply that, just let me know if you have any comments\n> and/or objections.\n\nAnd done on 12~15.\n\nWhile on it, I have also looked at source code references on github\nand debian that involve lwWaiting, and all of them rely on lwWaiting\nwhen not waiting, making LW_WS_NOT_WAITING an equivalent.\n--\nMichael",
"msg_date": "Thu, 18 Jan 2024 15:17:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 7:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 16, 2024 at 03:11:48PM +0900, Michael Paquier wrote:\n> > I'd like to apply that, just let me know if you have any comments\n> > and/or objections.\n>\n> And done on 12~15.\n\nHi Michael, just to reassure you that it is a good thing. We have a\ncustomer who reported much better performance on 16.x than on 13~15 in\nvery heavy duty LWLock/lockmanager scenarios (ofc, before that was\ncommitted/released), so I gave it a try here today to see how much can\nbe attributed to that single commit.\n\nGiven:\n# $s=10, $p=10,100, DURATION=10s, m=prepared,simple, no reruns, just\nsingle $DURATION run to save time\npgbench -i -s $s --partitions $p $DBNAME\nALTER TABLE pgbench_accounts ADD COLUMN aid_parent INT;\nUPDATE pgbench_accounts SET aid_parent = aid\nCREATE INDEX ON pgbench_accounts(aid_parent)\n\npgbench -n -M $m -T $DURATION -c $c -j $c -f join.sql $DBNAME\n\njoin.sql was:\n\\set aid random(1, 100000 * :scale)\nselect * from pgbench_accounts pa join pgbench_branches pb on pa.bid =\npb.bid where pa.aid_parent = :aid;\n\nsee attached results. The benefits are observable (at least when active\nworking sessions >= VCPUs [threads not cores]) and give up to ~2.65x\nboost in certain cases at least for this testcase. Hopefully others\nwill find it useful.\n\n-J.",
"msg_date": "Fri, 19 Jan 2024 13:49:59 +0100",
"msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 01:49:59PM +0100, Jakub Wartak wrote:\n> Hi Michael, just to reassure you that it is a good thing. We have a\n> customer who reported much better performance on 16.x than on 13~15 in\n> very heavy duty LWLock/lockmanager scenarios (ofc, before that was\n> committed/released), so I gave it a try here today to see how much can\n> be attributed to that single commit.\n\nAhh. Thanks for the feedback.\n--\nMichael",
"msg_date": "Mon, 22 Jan 2024 17:38:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heavily contended lwlocks with long wait queues scale badly"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have fallback code for computers that don't have 32 bit atomic ops.\nOf course all modern ISAs have 32 bit atomics, but various comments\nimagine that a new architecture might be born that we don't have\nsupport for yet, so the fallback provides a way to bring a new system\nup by implementing only the spinlock operations and emulating the\nrest. This seems pretty strange to me: by the time someone brings an\nSMP kernel up on a hypothetical new architecture and gets around to\nporting relational databases, it's hard to imagine that the compiler\nbuiltins and C11 atomic support wouldn't be working.\n\nI suppose this could be considered in the spirit of recent cleanup of\nobsolete code in v16. The specific reason I'm interested is that I\nhave a couple of different experimental patches in development that\nwould like to use atomic ops from a signal handler, which is against\nthe law if they're emulated with spinlocks due to self-deadlock. Not\nsure if it's really a blocker, I can surely find some way to code\naround the limitation (I want to collapse a lot of flags into a single\nword and set them with fetch_or), but it seemed a little weird to have\nto do so for such an unlikely hypothetical consideration.\n\n(64 bit atomics are another matter, real hardware exists that doesn't\nhave them.)\n\nNo patch yet, just running a flame-proof flag up the pole before\ninvesting effort...\n\n\n",
"msg_date": "Fri, 28 Oct 2022 11:42:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Requiring 32 bit atomics"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> We have fallback code for computers that don't have 32 bit atomic ops.\n> Of course all modern ISAs have 32 bit atomics, but various comments\n> imagine that a new architecture might be born that we don't have\n> support for yet, so the fallback provides a way to bring a new system\n> up by implementing only the spinlock operations and emulating the\n> rest. This seems pretty strange to me: by the time someone brings an\n> SMP kernel up on a hypothetical new architecture and gets around to\n> porting relational databases, it's hard to imagine that the compiler\n> builtins and C11 atomic support wouldn't be working.\n\nFair point. Another point you could make is that we no longer have\nany test coverage for machines without 32-bit atomic ops.\n\nBut wait, you say, what about mamba-nee-gaur, my HPPA dinosaur?\nThe only actual hardware support there is equivalent to TAS();\nnonetheless, if you read mamba's configure report you'll see it\nclaims to have atomic ops. I wondered if NetBSD was implementing\nthat by using kernel calls to disable interrupts, or something\nequally badly-performing. Turns out they have a pretty cute\nworkaround for it, on HPPA and a couple of other atomics-less\narches they still support. They've written short sequences that\nhave the effect of CAS and are designed to store to memory only\nat the end. To make them atomic, libc asks the kernel \"pretty\nplease, if you happen to notice that I've been interrupted in\nthe PC range from here to here, would you reset the PC to the\nstart of that before returning?\". At least on HPPA, this is\nimplemented for 8-bit, 16-bit, and 32-bit CAS and then all the\nother standard atomics are implemented on top of that, so that\nthe kernel doesn't spend too much time checking for these\naddress ranges when it takes an interrupt.\n\nOf course this only works on single-CPU machines. 
On multi-CPU\nthere's a completely different implementation that I've not spent\ntime looking at ... but I assume the performance is a lot worse.\n\nAnyway, I think the big picture here is that nowadays we could\nassume that the platform offers this feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Oct 2022 19:44:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Requiring 32 bit atomics"
},
{
"msg_contents": "I wrote:\n> But wait, you say, what about mamba-nee-gaur, my HPPA dinosaur?\n\nsigh ... s/mamba/chickadee/. Got too many NetBSD machines, perhaps.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 Oct 2022 19:46:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Requiring 32 bit atomics"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-27 19:44:13 -0400, Tom Lane wrote:\n> Turns out they have a pretty cute workaround for it, on HPPA and a couple of\n> other atomics-less arches they still support. They've written short\n> sequences that have the effect of CAS and are designed to store to memory\n> only at the end. To make them atomic, libc asks the kernel \"pretty please,\n> if you happen to notice that I've been interrupted in the PC range from here\n> to here, would you reset the PC to the start of that before returning?\".\n\nThat sounds roughly like restartable sequences in the linux world - a pretty\ncool feature. It's too bad that it's not yet available everywhere, it does\nmake some things a lot easier [to make performant].\n\n\n> Anyway, I think the big picture here is that nowadays we could\n> assume that the platform offers this feature.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 27 Oct 2022 17:01:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Requiring 32 bit atomics"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently all backends have LatchWaitSet (latch.c), and most also have\nFeBeWaitSet (pqcomm.c). It's not the end of the world, but it's a\nlittle bit wasteful in terms of kernel resources to have two\nepoll/kqueue descriptors per backend.\n\nI wonder if we should consider merging them into a single\nBackendWaitSet. The reason they exist is because callers of\nWaitLatch() might be woken by the kernel just because data appears on\nthe FeBe socket. One idea is that we could assume that socket\nreadiness events should be rare enough at WaitLatch() sites that it's\nenough to disable them lazily if they are reported. The FeBe code\nalready adjusts as required. For example, if you're waiting for a\nheavyweight lock or condition variable while executing a query, and\npipelined query or COPY data arrives, you'll spuriously wake up, but\nonly once and not again until you eventually reach FeBe read and all\nqueued socket data is drained and more data arrives.\n\nSketch patch attached. Just an idea, not putting into commitfest yet.\n\n(Besides the wasted kernel resources, I also speculate that things get\npretty confusing if you try to switch to completion based APIs for\nmore efficient socket IO on various OSes, depending on how you\nimplement latches. I have some handwavy theories about various\nschemes to achieve that on Linux, Windows and FreeBSD with various\ndifferent problems relating to the existence of two kernel objects.\nWhich is a bit more fuel for my early suspicion that postgres_fdw,\nwhich currently creates and destroys WES, should eventually also use\nBackendWaitSet, which should be dynamically resizing. But that's for\nanother time.)",
"msg_date": "Fri, 28 Oct 2022 15:43:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Merging LatchWaitSet and FeBeWaitSet"
}
] |
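(Editorial note between threads: the "tolerate one spurious wakeup, then lazily disable socket events" idea above can be modelled with plain epoll, without any PostgreSQL infrastructure. The sketch below is Linux-only and purely illustrative — a second pipe stands in for the FeBe socket, and the function name is invented.)

```c
#include <assert.h>
#include <string.h>
#include <sys/epoll.h>
#include <unistd.h>

/*
 * Toy model of a merged wait set: one epoll instance watches both a
 * "latch" pipe and a "socket" (here a second pipe).  A wait site that
 * only cares about the latch tolerates one spurious socket wakeup, then
 * lazily disables socket readiness with EPOLL_CTL_MOD instead of keeping
 * a second kernel object around.  Returns 0 if the wakeups arrive in the
 * expected order, -1 otherwise.  Not PostgreSQL code.
 */
static int
merged_waitset_demo(void)
{
    int latch[2], sock[2], ep;
    struct epoll_event ev, out;

    if (pipe(latch) != 0 || pipe(sock) != 0)
        return -1;
    if ((ep = epoll_create1(0)) < 0)
        return -1;

    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLIN;
    ev.data.fd = latch[0];
    if (epoll_ctl(ep, EPOLL_CTL_ADD, latch[0], &ev) != 0)
        return -1;
    ev.data.fd = sock[0];
    if (epoll_ctl(ep, EPOLL_CTL_ADD, sock[0], &ev) != 0)
        return -1;

    /* Socket data arrives while we are "waiting on the latch" ... */
    if (write(sock[1], "x", 1) != 1)
        return -1;
    /* ... so we wake up spuriously, once. */
    if (epoll_wait(ep, &out, 1, 1000) != 1 || out.data.fd != sock[0])
        return -1;

    /* Lazily stop listening for socket readiness. */
    ev.events = 0;
    ev.data.fd = sock[0];
    if (epoll_ctl(ep, EPOLL_CTL_MOD, sock[0], &ev) != 0)
        return -1;

    /* From now on, only latch activity wakes us, even though the
     * "socket" still has unread data queued. */
    if (write(latch[1], "x", 1) != 1)
        return -1;
    if (epoll_wait(ep, &out, 1, 1000) != 1 || out.data.fd != latch[0])
        return -1;
    return 0;
}
```

The FeBe read path would re-enable `EPOLLIN` on the socket the next time it actually wants socket data, mirroring how the sketch patch adjusts the event mask on demand.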
[
{
"msg_contents": "Hi,\n\nWe usually want to release lwlocks, and definitely spinlocks, before\ncalling SetLatch(), to avoid putting a system call into the locked\nregion so that we minimise the time held. There are a few places\nwhere we don't do that, possibly because it's not just a simple latch\nto hold a pointer to but rather a set of them that needs to be\ncollected from some data structure and we don't have infrastructure to\nhelp with that. There are also cases where we semi-reliably create\nlock contention, because the backends that wake up immediately try to\nacquire the very same lock.\n\nOne example is heavyweight lock wakeups. If you run BEGIN; LOCK TABLE\nt; ... and then N other sessions wait in SELECT * FROM t;, and then\nyou run ... COMMIT;, you'll see the first session wake all the others\nwhile it still holds the partition lock itself. They'll all wake up\nand begin to re-acquire the same partition lock in exclusive mode,\nimmediately go back to sleep on *that* wait list, and then wake each\nother up one at a time in a chain. We could avoid the first\ndouble-bounce by not setting the latches until after we've released\nthe partition lock. We could avoid the rest of them by not\nre-acquiring the partition lock at all, which ... if I'm reading right\n... shouldn't actually be necessary in modern PostgreSQL? Or if there\nis another reason to re-acquire then maybe the comment should be\nupdated.\n\nPresumably no one really does that repeatedly while there is a long\nqueue of non-conflicting waiters, so I'm not claiming it's a major\nimprovement, but it's at least a micro-optimisation.\n\nThere are some other simpler mechanical changes including synchronous\nreplication, SERIALIZABLE DEFERRABLE and condition variables (this one\ninspired by Yura Sokolov's patches[1]). Actually I'm not at all sure\nabout the CV implementation, I feel like a more ambitious change is\nneeded to make our CVs perform.\n\nSee attached sketch patches. 
I guess the main thing that may not be\ngood enough is the use of a fixed sized latch buffer. Memory\nallocation in don't-throw-here environments like the guts of lock code\nmight be an issue, which is why it just gives up and flushes when\nfull; maybe it should try to allocate and fall back to flushing only\nif that fails. These sketch patches aren't proposals, just\nobservations in need of more study.\n\n[1] https://postgr.es/m/1edbb61981fe1d99c3f20e3d56d6c88999f4227c.camel%40postgrespro.ru",
"msg_date": "Fri, 28 Oct 2022 16:56:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Latches vs lwlock contention"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 4:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> See attached sketch patches. I guess the main thing that may not be\n> good enough is the use of a fixed sized latch buffer. Memory\n> allocation in don't-throw-here environments like the guts of lock code\n> might be an issue, which is why it just gives up and flushes when\n> full; maybe it should try to allocate and fall back to flushing only\n> if that fails.\n\nHere's an attempt at that. There aren't actually any cases of uses of\nthis stuff in critical sections here, so perhaps I shouldn't bother\nwith that part. The part I'd most like some feedback on is the\nheavyweight lock bits. I'll add this to the commitfest.",
"msg_date": "Wed, 2 Nov 2022 00:09:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 16:40, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Oct 28, 2022 at 4:56 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > See attached sketch patches. I guess the main thing that may not be\n> > good enough is the use of a fixed sized latch buffer. Memory\n> > allocation in don't-throw-here environments like the guts of lock code\n> > might be an issue, which is why it just gives up and flushes when\n> > full; maybe it should try to allocate and fall back to flushing only\n> > if that fails.\n>\n> Here's an attempt at that. There aren't actually any cases of uses of\n> this stuff in critical sections here, so perhaps I shouldn't bother\n> with that part. The part I'd most like some feedback on is the\n> heavyweight lock bits. I'll add this to the commitfest.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\n456fa635a909ee36f73ca84d340521bd730f265f ===\n=== applying patch ./v2-0003-Use-SetLatches-for-condition-variables.patch\npatching file src/backend/storage/lmgr/condition_variable.c\npatching file src/backend/storage/lmgr/lwlock.c\nHunk #1 FAILED at 183.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/storage/lmgr/lwlock.c.rej\npatching file src/include/storage/condition_variable.h\npatching file src/include/storage/lwlock.h\nHunk #1 FAILED at 193.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/include/storage/lwlock.h.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3998.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 27 Jan 2023 20:09:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "On Sat, Jan 28, 2023 at 3:39 AM vignesh C <vignesh21@gmail.com> wrote:\n> On Tue, 1 Nov 2022 at 16:40, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's an attempt at that. There aren't actually any cases of uses of\n> > this stuff in critical sections here, so perhaps I shouldn't bother\n> > with that part. The part I'd most like some feedback on is the\n> > heavyweight lock bits. I'll add this to the commitfest.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nRebased. I dropped the CV patch for now.",
"msg_date": "Sun, 5 Mar 2023 08:50:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "On 04.03.23 20:50, Thomas Munro wrote:\n> Subject: [PATCH v3 1/6] Allow palloc_extended(NO_OOM) in critical sections.\n> \n> Commit 4a170ee9e0e banned palloc() and similar in critical sections, because an\n> allocation failure would produce a panic. Make an exception for allocation\n> with NULL on failure, for code that has a backup plan.\n\nI suppose this assumes that out of memory is the only possible error \ncondition that we are concerned about for this?\n\nFor example, we sometimes see \"invalid memory alloc request size\" either \nbecause of corrupted data or because code does things we didn't expect. \nThis would then possibly panic? Also, the realloc code paths \npotentially do more work with possibly more error conditions, and/or \nthey error out right away because it's not supported by the context type.\n\nMaybe this is all ok, but it would be good to make the assumptions more \nexplicit.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:58:09 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "Hi,\n\n> Maybe this is all ok, but it would be good to make the assumptions more\n> explicit.\n\nHere are my two cents.\n\n```\nstatic void\nSetLatchV(Latch **latches, int nlatches)\n{\n /* Flush any other changes out to main memory just once. */\n pg_memory_barrier();\n\n /* Keep only latches that are not already set, and set them. */\n for (int i = 0; i < nlatches; ++i)\n {\n Latch *latch = latches[i];\n\n if (!latch->is_set)\n latch->is_set = true;\n else\n latches[i] = NULL;\n }\n\n pg_memory_barrier();\n\n[...]\n\nvoid\nSetLatches(LatchGroup *group)\n{\n if (group->size > 0)\n {\n SetLatchV(group->latches, group->size);\n\n[...]\n```\n\nI suspect this API may be error-prone without some additional\ncomments. The caller (which may be an extension author too) may rely\non the implementation details of SetLatches() / SetLatchV() and use\nthe returned group->latches[] values e.g. to figure out whether he\nattempted to change the state of the given latch. Even worse, one can\nmistakenly assume that the result says exactly if the caller was the\none who changed the state of the latch. This being said I see why this\nparticular implementation was chosen.\n\nI added corresponding comments to SetLatchV() and SetLatches(). Also\nthe patchset needed a rebase. PFA v4.\n\nIt passes `make installcheck-world` on 3 machines of mine: MacOS x64,\nLinux x64 and Linux RISC-V.\n\n\n--\nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 11 Jul 2023 18:11:31 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "On 28/10/2022 06:56, Thomas Munro wrote:\n> One example is heavyweight lock wakeups. If you run BEGIN; LOCK TABLE\n> t; ... and then N other sessions wait in SELECT * FROM t;, and then\n> you run ... COMMIT;, you'll see the first session wake all the others\n> while it still holds the partition lock itself. They'll all wake up\n> and begin to re-acquire the same partition lock in exclusive mode,\n> immediately go back to sleep on*that* wait list, and then wake each\n> other up one at a time in a chain. We could avoid the first\n> double-bounce by not setting the latches until after we've released\n> the partition lock. We could avoid the rest of them by not\n> re-acquiring the partition lock at all, which ... if I'm reading right\n> ... shouldn't actually be necessary in modern PostgreSQL? Or if there\n> is another reason to re-acquire then maybe the comment should be\n> updated.\n\nISTM that the change to not re-aqcuire the lock in ProcSleep is \nindependent from the other changes. Let's split that off to a separate \npatch.\n\nI agree it should be safe. Acquiring a lock just to hold off interrupts \nis overkill anwyway, HOLD_INTERRUPTS() would be enough. \nLockErrorCleanup() uses HOLD_INTERRUPTS() already.\n\nThere are no CHECK_FOR_INTERRUPTS() in GrantAwaitedLock(), so cancel/die \ninterrupts can't happen here. But could we add HOLD_INTERRUPTS(), just \npro forma, to document the assumption? It's a little awkward: you really \nshould hold interrupts until the caller has done \"awaitedLock = NULL;\". \nSo it's not quite enough to add a pair of HOLD_ and RESUME_INTERRUPTS() \nat the end of ProcSleep(). You'd need to do the HOLD_INTERRUPTS() in \nProcSleep() and require the caller to do RESUME_INTERRUPTS(). In a \nsense, ProcSleep downgrades the lock on the partition to just holding \noff interrupts.\n\nOverall +1 on this change to not re-acquire the partition lock.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 28 Sep 2023 12:58:12 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "On 28/09/2023 12:58, Heikki Linnakangas wrote:\n> On 28/10/2022 06:56, Thomas Munro wrote:\n>> One example is heavyweight lock wakeups. If you run BEGIN; LOCK TABLE\n>> t; ... and then N other sessions wait in SELECT * FROM t;, and then\n>> you run ... COMMIT;, you'll see the first session wake all the others\n>> while it still holds the partition lock itself. They'll all wake up\n>> and begin to re-acquire the same partition lock in exclusive mode,\n>> immediately go back to sleep on*that* wait list, and then wake each\n>> other up one at a time in a chain. We could avoid the first\n>> double-bounce by not setting the latches until after we've released\n>> the partition lock. We could avoid the rest of them by not\n>> re-acquiring the partition lock at all, which ... if I'm reading right\n>> ... shouldn't actually be necessary in modern PostgreSQL? Or if there\n>> is another reason to re-acquire then maybe the comment should be\n>> updated.\n> \n> ISTM that the change to not re-aqcuire the lock in ProcSleep is\n> independent from the other changes. Let's split that off to a separate\n> patch.\n\nI spent some time on splitting that off. I had to start from scratch, \nbecause commit 2346df6fc373df9c5ab944eebecf7d3036d727de conflicted \nheavily with your patch.\n\nI split ProcSleep() into two functions: JoinWaitQueue does the first \npart of ProcSleep(), adding the process to the wait queue and checking \nfor the dontWait and early deadlock cases. What remains in ProcSleep() \ndoes just the sleeping part. JoinWaitQueue is called with the partition \nlock held, and ProcSleep() is called without it. This way, the partition \nlock is acquired and released in the same function \n(LockAcquireExtended), avoiding awkward \"lock is held on enter, but \nmight be released on exit depending on the outcome\" logic.\n\nThis is actually a set of 8 patches. The first 7 are independent tiny \nfixes and refactorings in these functions. 
See individual commit messages.\n\n> I agree it should be safe. Acquiring a lock just to hold off interrupts\n> is overkill anwyway, HOLD_INTERRUPTS() would be enough.\n> LockErrorCleanup() uses HOLD_INTERRUPTS() already.\n> \n> There are no CHECK_FOR_INTERRUPTS() in GrantAwaitedLock(), so cancel/die\n> interrupts can't happen here. But could we add HOLD_INTERRUPTS(), just\n> pro forma, to document the assumption? It's a little awkward: you really\n> should hold interrupts until the caller has done \"awaitedLock = NULL;\".\n> So it's not quite enough to add a pair of HOLD_ and RESUME_INTERRUPTS()\n> at the end of ProcSleep(). You'd need to do the HOLD_INTERRUPTS() in\n> ProcSleep() and require the caller to do RESUME_INTERRUPTS(). In a\n> sense, ProcSleep downgrades the lock on the partition to just holding\n> off interrupts.\n\nI didn't use HOLD/RESUME_INTERRUPTS() after all. Like you did, I'm just \nrelying on the fact that there are no CHECK_FOR_INTERRUPTS() calls in \nplaces where they might cause trouble. Those sections are short, so I \nthink it's fine.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 22 Jul 2024 22:15:37 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Latches vs lwlock contention"
},
{
"msg_contents": "I looked at the patch set and found it quite useful.\n\nThe first 7 patches are just refactoring and may be committed separately if\nneeded.\nThere were minor problems: patch #5 don't want to apply clearly and the #8\nis complained\nabout partitionLock is unused if we build without asserts. So, I add a\nPG_USED_FOR_ASSERTS_ONLY\nto solve the last issue.\n\nAgain, overall patch looks good and seems useful to me. Here is the rebased\nv5 version based on Heikki's patch set above.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 10 Sep 2024 19:53:02 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Latches vs lwlock contention"
}
] |
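(Editorial note between threads: the deferred-wakeup pattern discussed above — collect latches while holding the lock, set them only after release — can be shown without any real latch machinery. In the toy sketch below a latch is just a flag; the names `LatchGroup`, `AddLatch` and `SetLatches` merely echo the sketch patches and are not the committed API.)

```c
#include <assert.h>

/*
 * Toy illustration of batched latch wakeups.  A real Latch involves a
 * signal or kernel event (i.e. a system call to set), which is exactly
 * why the thread wants the wakeups moved outside the locked region;
 * here it is reduced to a flag so the ordering is easy to see.
 */
typedef struct Latch
{
    int is_set;
} Latch;

#define LATCH_GROUP_MAX 8

typedef struct LatchGroup
{
    int    size;
    Latch *latches[LATCH_GROUP_MAX];
} LatchGroup;

/*
 * Remember a latch to set later.  Real code would flush (or try to
 * allocate a bigger buffer) when the fixed buffer fills up.
 */
static void
AddLatch(LatchGroup *group, Latch *latch)
{
    if (group->size < LATCH_GROUP_MAX)
        group->latches[group->size++] = latch;
}

/* Set all remembered latches, skipping ones that are already set. */
static void
SetLatches(LatchGroup *group)
{
    for (int i = 0; i < group->size; i++)
    {
        if (!group->latches[i]->is_set)
            group->latches[i]->is_set = 1;
    }
    group->size = 0;
}
```

The calling pattern for the heavyweight-lock case would then be: acquire the partition lock, call `AddLatch()` for each waiter being granted the lock, release the partition lock, and only then call `SetLatches()` — so no waiter wakes up while the waker still holds the lock they are about to contend on.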
[
{
"msg_contents": "This adds a new psql command \\gp that works like \\g (or semicolon) but\nuses the extended query protocol. Parameters can also be passed, like\n\n SELECT $1, $2 \\gp 'foo' 'bar'\n\nI have two main purposes for this:\n\nOne, for transparent column encryption [0], we need a way to pass \nprotocol-level parameters. The present patch in the [0] thread uses a \ncommand \\gencr, but based on feedback and further thinking, a \ngeneral-purpose command seems better.\n\nTwo, for testing the extended query protocol from psql. For example, \nfor the dynamic result sets patch [1], I have several ad-hoc libpq test \nprograms lying around, which would be cumbersome to integrate into the \npatch. With psql support like proposed here, it would be very easy to \nintegrate a few equivalent tests.\n\nPerhaps this would also be useful for general psql scripting.\n\n\n[0]: https://commitfest.postgresql.org/40/3718/\n[1]: https://commitfest.postgresql.org/40/2911/",
"msg_date": "Fri, 28 Oct 2022 08:52:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 08:52:51AM +0200, Peter Eisentraut wrote:\n> Two, for testing the extended query protocol from psql. For example, for\n> the dynamic result sets patch [1], I have several ad-hoc libpq test programs\n> lying around, which would be cumbersome to integrate into the patch. With\n> psql support like proposed here, it would be very easy to integrate a few\n> equivalent tests.\n\n+1. As far as I recall, we now have only ECPG to rely on when it\ncomes to coverage of the extended query protocol, but even that has\nits limits. (Haven't looked at the patch)\n--\nMichael",
"msg_date": "Fri, 28 Oct 2022 16:07:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Fri, Oct 28, 2022 at 08:52:51AM +0200, Peter Eisentraut wrote:\n> Perhaps this would also be useful for general psql scripting.\n\n+1\n\nIt makes great sense to that psql would support it (I've suggested to a\nfew people over the last few years to do that using pygres, lacking an\neasier way).\n\nI wondered briefly if normal \\g should change to use the extended\nprotocol. But there ought to be a way to do both/either, so it's better\nhow you wrote it.\n\nOn Fri, Oct 28, 2022 at 04:07:31PM +0900, Michael Paquier wrote:\n> +1. As far as I recall, we now have only ECPG to rely on when it\n> comes to coverage of the extended query protocol, but even that has\n> its limits. (Haven't looked at the patch)\n\nAnd pgbench (see 1ea396362)\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 28 Oct 2022 08:27:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Oct 28, 2022 at 08:52:51AM +0200, Peter Eisentraut wrote:\n>> Two, for testing the extended query protocol from psql. For example, for\n>> the dynamic result sets patch [1], I have several ad-hoc libpq test programs\n>> lying around, which would be cumbersome to integrate into the patch. With\n>> psql support like proposed here, it would be very easy to integrate a few\n>> equivalent tests.\n\n> +1. As far as I recall, we now have only ECPG to rely on when it\n> comes to coverage of the extended query protocol, but even that has\n> its limits. (Haven't looked at the patch)\n\npgbench can be used too, but we lack any infrastructure for using it\nin the regression tests. Something in psql could be a lot more\nhelpful. (I've not studied the patch either.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Oct 2022 09:35:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Fri, 28 Oct 2022 at 07:53, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> This adds a new psql command \\gp that works like \\g (or semicolon) but\n> uses the extended query protocol. Parameters can also be passed, like\n>\n> SELECT $1, $2 \\gp 'foo' 'bar'\n\n+1 for the concept. The patch looks simple and complete.\n\nI find it strange to use it the way you have shown above, i.e. \\gp on\nsame line after a query.\n\nFor me it would be clearer to have tests and docs showing this\n SELECT $1, $2\n \\gp 'foo' 'bar'\n\n> Perhaps this would also be useful for general psql scripting.\n\n...since if we used this in a script, it would be used like this, I think...\n\n SELECT $1, $2\n \\gp 'foo' 'bar'\n \\gp 'bar' 'baz'\n ...\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 1 Nov 2022 09:10:20 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 01.11.22 10:10, Simon Riggs wrote:\n> On Fri, 28 Oct 2022 at 07:53, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> This adds a new psql command \\gp that works like \\g (or semicolon) but\n>> uses the extended query protocol. Parameters can also be passed, like\n>>\n>> SELECT $1, $2 \\gp 'foo' 'bar'\n> \n> +1 for the concept. The patch looks simple and complete.\n> \n> I find it strange to use it the way you have shown above, i.e. \\gp on\n> same line after a query.\n\nThat's how all the \"\\g\" commands work.\n\n> ...since if we used this in a script, it would be used like this, I think...\n> \n> SELECT $1, $2\n> \\gp 'foo' 'bar'\n> \\gp 'bar' 'baz'\n> ...\n\nInteresting, but I think for that we should use named prepared \nstatements, so that would be a separate \"\\gsomething\" command in psql, like\n\n SELECT $1, $2 \\gprep p1\n \\grun p1 'foo' 'bar'\n \\grun p1 'bar' 'baz'\n\n\n\n",
"msg_date": "Tue, 1 Nov 2022 16:47:51 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 20:48, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 01.11.22 10:10, Simon Riggs wrote:\n> > On Fri, 28 Oct 2022 at 07:53, Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >>\n> >> This adds a new psql command \\gp that works like \\g (or semicolon) but\n> >> uses the extended query protocol. Parameters can also be passed, like\n> >>\n> >> SELECT $1, $2 \\gp 'foo' 'bar'\n> >\n> > +1 for the concept. The patch looks simple and complete.\n> >\n> > I find it strange to use it the way you have shown above, i.e. \\gp on\n> > same line after a query.\n>\n> That's how all the \"\\g\" commands work.\n\nYes, I see that, but it also works exactly the way I said also.\n\ni.e.\nSELECT 'foo'\n\\g\n\nis the same thing as\n\nSELECT 'foo' \\g\n\nBut there are no examples in the docs of the latter usage, and so it\nis a surprise to me and probably to others also\n\n> > ...since if we used this in a script, it would be used like this, I think...\n> >\n> > SELECT $1, $2\n> > \\gp 'foo' 'bar'\n> > \\gp 'bar' 'baz'\n> > ...\n>\n> Interesting, but I think for that we should use named prepared\n> statements, so that would be a separate \"\\gsomething\" command in psql, like\n>\n> SELECT $1, $2 \\gprep p1\n> \\grun p1 'foo' 'bar'\n> \\grun p1 'bar' 'baz'\n\nNot sure I understand this... you seem to be arguing against your own\npatch?? I quite liked the way you had it, I'm just asking for the docs\nto put the \\gp on the following line.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 1 Nov 2022 22:58:52 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": ">\n>\n> SELECT $1, $2 \\gp 'foo' 'bar'\n>\n>\nI think this is a great idea, but I foresee people wanting to send that\noutput to a file or a pipe like \\g allows. If we assume everything after\nthe \\gp is a param, don't we paint ourselves into a corner?\n\n\n SELECT $1, $2 \\gp 'foo' 'bar'I think this is a great idea, but I foresee people wanting to send that output to a file or a pipe like \\g allows. If we assume everything after the \\gp is a param, don't we paint ourselves into a corner?",
"msg_date": "Wed, 2 Nov 2022 01:18:54 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "Hi,\n\nOn Fri, 28 Oct 2022 08:52:51 +0200\nPeter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> This adds a new psql command \\gp that works like \\g (or semicolon) but\n> uses the extended query protocol. Parameters can also be passed, like\n> \n> SELECT $1, $2 \\gp 'foo' 'bar'\n\nAs I wrote in my TCE review, would it be possible to use psql vars to set some\nnamed parameters for the prepared query? This would looks like:\n\n \\set p1 foo\n \\set p2 bar\n SELECT :'p1', :'p2' \\gp\n\nThis seems useful when running psql script passing it some variables using\n-v arg. It helps with var position, changing some between exec, repeating them\nin the query, etc.\n\nThoughts?\n\n\n",
"msg_date": "Wed, 2 Nov 2022 13:43:27 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "st 2. 11. 2022 v 13:43 odesílatel Jehan-Guillaume de Rorthais <\njgdr@dalibo.com> napsal:\n\n> Hi,\n>\n> On Fri, 28 Oct 2022 08:52:51 +0200\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>\n> > This adds a new psql command \\gp that works like \\g (or semicolon) but\n> > uses the extended query protocol. Parameters can also be passed, like\n> >\n> > SELECT $1, $2 \\gp 'foo' 'bar'\n>\n> As I wrote in my TCE review, would it be possible to use psql vars to set\n> some\n> named parameters for the prepared query? This would looks like:\n>\n> \\set p1 foo\n> \\set p2 bar\n> SELECT :'p1', :'p2' \\gp\n>\n> This seems useful when running psql script passing it some variables using\n> -v arg. It helps with var position, changing some between exec, repeating\n> them\n> in the query, etc.\n>\n> Thoughts?\n>\n\nI don't think it is possible. The variable evaluation is done before\nparsing the backslash command.\n\nRegards\n\nPavel\n\nst 2. 11. 2022 v 13:43 odesílatel Jehan-Guillaume de Rorthais <jgdr@dalibo.com> napsal:Hi,\n\nOn Fri, 28 Oct 2022 08:52:51 +0200\nPeter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> This adds a new psql command \\gp that works like \\g (or semicolon) but\n> uses the extended query protocol. Parameters can also be passed, like\n> \n> SELECT $1, $2 \\gp 'foo' 'bar'\n\nAs I wrote in my TCE review, would it be possible to use psql vars to set some\nnamed parameters for the prepared query? This would looks like:\n\n \\set p1 foo\n \\set p2 bar\n SELECT :'p1', :'p2' \\gp\n\nThis seems useful when running psql script passing it some variables using\n-v arg. It helps with var position, changing some between exec, repeating them\nin the query, etc.\n\nThoughts?I don't think it is possible. The variable evaluation is done before parsing the backslash command.RegardsPavel",
"msg_date": "Wed, 2 Nov 2022 13:55:22 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "\tJehan-Guillaume de Rorthais wrote:\n\n> As I wrote in my TCE review, would it be possible to use psql vars to set\n> some\n> named parameters for the prepared query? This would looks like:\n> \n> \\set p1 foo\n> \\set p2 bar\n> SELECT :'p1', :'p2' \\gp\n\nAs I understand the feature, variables would be passed like this:\n\n\\set var1 'foo bar'\n\\set var2 'baz''qux'\n\nselect $1, $2 \\gp :var1 :var2\n\n ?column? | ?column? \n----------+----------\n foo bar | baz'qux\n\nIt appears to work fine with the current patch.\n\nThis is consistent with the fact that PQexecParams passes $N\nparameters ouf of the SQL query (versus injecting them in the text of\nthe query) which is also why no quoting is needed.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 02 Nov 2022 16:04:02 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Wed, 02 Nov 2022 16:04:02 +0100\n\"Daniel Verite\" <daniel@manitou-mail.org> wrote:\n\n> \tJehan-Guillaume de Rorthais wrote:\n> \n> > As I wrote in my TCE review, would it be possible to use psql vars to set\n> > some named parameters for the prepared query? This would looks like:\n> > \n> > \\set p1 foo\n> > \\set p2 bar\n> > SELECT :'p1', :'p2' \\gp \n> \n> As I understand the feature, variables would be passed like this:\n> \n> \\set var1 'foo bar'\n> \\set var2 'baz''qux'\n> \n> select $1, $2 \\gp :var1 :var2\n> \n> ?column? | ?column? \n> ----------+----------\n> foo bar | baz'qux\n> \n> It appears to work fine with the current patch.\n\nIndeed, nice.\n\n> This is consistent with the fact that PQexecParams passes $N\n> parameters ouf of the SQL query (versus injecting them in the text of\n> the query)\n\nI was not thinking about injecting them in the texte of the query, this\nwould not be using the extended protocol anymore, or maybe with no parameter,\nbut there's no point.\n\nWhat I was thinking about is psql replacing the variables from the query text\nwith the $N notation before sending it using PQprepare.\n\n> which is also why no quoting is needed.\n\nIndeed, the quotes were not needed in my example.\n\nThanks,\n\n\n",
"msg_date": "Wed, 2 Nov 2022 17:24:35 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 02.11.22 01:18, Corey Huinker wrote:\n> \n> SELECT $1, $2 \\gp 'foo' 'bar'\n> \n> \n> I think this is a great idea, but I foresee people wanting to send that \n> output to a file or a pipe like \\g allows. If we assume everything after \n> the \\gp is a param, don't we paint ourselves into a corner?\n\nAny thoughts on how that syntax could be generalized?\n\n\n",
"msg_date": "Fri, 4 Nov 2022 16:45:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 11:45 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 02.11.22 01:18, Corey Huinker wrote:\n> >\n> > SELECT $1, $2 \\gp 'foo' 'bar'\n> >\n> >\n> > I think this is a great idea, but I foresee people wanting to send that\n> > output to a file or a pipe like \\g allows. If we assume everything after\n> > the \\gp is a param, don't we paint ourselves into a corner?\n>\n> Any thoughts on how that syntax could be generalized?\n>\n\nA few:\n\nThe most compact idea I can think of is to have \\bind and \\endbind (or more\nterse equivalents \\bp and \\ebp)\n\nSELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\bind 'param1' 'param2'\n\\endbind $2 \\g filename.csv\n\nMaybe the end-bind param isn't needed at all, we just insist that bind\nparams be single quoted strings or numbers, so the next slash command ends\nthe bind list.\n\nIf that proves difficult, we might save bind params like registers\n\nsomething like this, positional:\n\n\\bind 1 'param1'\n\\bind 2 'param2'\nSELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\n\\unbind\n\nor all the binds on one line\n\n\\bindmany 'param1' 'param2'\nSELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\n\\unbind\n\nThen psql would merely have to check if it had any bound registers, and if\nso, the next query executed is extended query protocol, and \\unbind wipes\nout the binds to send us back to regular mode.\n\nOn Fri, Nov 4, 2022 at 11:45 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 02.11.22 01:18, Corey Huinker wrote:\n> \n> SELECT $1, $2 \\gp 'foo' 'bar'\n> \n> \n> I think this is a great idea, but I foresee people wanting to send that \n> output to a file or a pipe like \\g allows. 
If we assume everything after \n> the \\gp is a param, don't we paint ourselves into a corner?\n\nAny thoughts on how that syntax could be generalized?A few:The most compact idea I can think of is to have \\bind and \\endbind (or more terse equivalents \\bp and \\ebp)SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\bind 'param1' 'param2' \\endbind $2 \\g filename.csvMaybe the end-bind param isn't needed at all, we just insist that bind params be single quoted strings or numbers, so the next slash command ends the bind list.If that proves difficult, we might save bind params like registerssomething like this, positional:\\bind 1 'param1'\\bind 2 'param2'SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\\unbindor all the binds on one line\\bindmany 'param1' 'param2'SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\\unbindThen psql would merely have to check if it had any bound registers, and if so, the next query executed is extended query protocol, and \\unbind wipes out the binds to send us back to regular mode.",
"msg_date": "Sat, 5 Nov 2022 02:34:47 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "so 5. 11. 2022 v 7:35 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n> On Fri, Nov 4, 2022 at 11:45 AM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>\n>> On 02.11.22 01:18, Corey Huinker wrote:\n>> >\n>> > SELECT $1, $2 \\gp 'foo' 'bar'\n>> >\n>> >\n>> > I think this is a great idea, but I foresee people wanting to send that\n>> > output to a file or a pipe like \\g allows. If we assume everything\n>> after\n>> > the \\gp is a param, don't we paint ourselves into a corner?\n>>\n>> Any thoughts on how that syntax could be generalized?\n>>\n>\n> A few:\n>\n> The most compact idea I can think of is to have \\bind and \\endbind (or\n> more terse equivalents \\bp and \\ebp)\n>\n> SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\bind 'param1' 'param2'\n> \\endbind $2 \\g filename.csv\n>\n> Maybe the end-bind param isn't needed at all, we just insist that bind\n> params be single quoted strings or numbers, so the next slash command ends\n> the bind list.\n>\n> If that proves difficult, we might save bind params like registers\n>\n> something like this, positional:\n>\n> \\bind 1 'param1'\n> \\bind 2 'param2'\n> SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\n> \\unbind\n>\n> or all the binds on one line\n>\n> \\bindmany 'param1' 'param2'\n> SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\n> \\unbind\n>\n> Then psql would merely have to check if it had any bound registers, and if\n> so, the next query executed is extended query protocol, and \\unbind wipes\n> out the binds to send us back to regular mode.\n>\n\nwhat about introduction new syntax for psql variables that should be passed\nas bind variables.\n\nlike\n\nSELECT * FROM foo WHERE x = $x \\g\n\nany time when this syntax can be used, then extended query protocol will be\nused\n\nand without any variable, the extended query protocol can be forced by psql\nconfig variable\n\nlike\n\n\\set EXTENDED_QUERY_PROTOCOL 
true\nSELECT 1;\n\nRegards\n\nPavel\n\n\n>\n>\n>\n\nso 5. 11. 2022 v 7:35 odesílatel Corey Huinker <corey.huinker@gmail.com> napsal:On Fri, Nov 4, 2022 at 11:45 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 02.11.22 01:18, Corey Huinker wrote:\n> \n> SELECT $1, $2 \\gp 'foo' 'bar'\n> \n> \n> I think this is a great idea, but I foresee people wanting to send that \n> output to a file or a pipe like \\g allows. If we assume everything after \n> the \\gp is a param, don't we paint ourselves into a corner?\n\nAny thoughts on how that syntax could be generalized?A few:The most compact idea I can think of is to have \\bind and \\endbind (or more terse equivalents \\bp and \\ebp)SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\bind 'param1' 'param2' \\endbind $2 \\g filename.csvMaybe the end-bind param isn't needed at all, we just insist that bind params be single quoted strings or numbers, so the next slash command ends the bind list.If that proves difficult, we might save bind params like registerssomething like this, positional:\\bind 1 'param1'\\bind 2 'param2'SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\\unbindor all the binds on one line\\bindmany 'param1' 'param2'SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\g filename.csv\\unbindThen psql would merely have to check if it had any bound registers, and if so, the next query executed is extended query protocol, and \\unbind wipes out the binds to send us back to regular mode.what about introduction new syntax for psql variables that should be passed as bind variables.likeSELECT * FROM foo WHERE x = $x \\gany time when this syntax can be used, then extended query protocol will be usedand without any variable, the extended query protocol can be forced by psql config variablelike\\set EXTENDED_QUERY_PROTOCOL trueSELECT 1;RegardsPavel",
"msg_date": "Sat, 5 Nov 2022 09:46:14 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": ">\n>\n>\n> what about introduction new syntax for psql variables that should be\n> passed as bind variables.\n>\n\nI thought about basically reserving the \\$[0-9]+ space as bind variables,\nbut it is possible, though unlikely, that users have been naming their\nvariables like that.\n\nIt's unclear from your example if that's what you meant, or if you wanted\nactual named variables ($name, $timestamp_before, $x).\n\nActual named variables might cause problems with CREATE FUNCTION AS ...\n$body$ ... $body$; as well as the need to deduplicate them.\n\nSo while it is less seamless, I do like the \\bind x y z \\g idea because it\nrequires no changes in variable interpolation, and the list can be\nterminated with a slash command or ;\n\nTo your point about forcing extended query protocol even when no parameters\nare, that would be SELECT 1 \\bind \\g\n\nIt hasn't been discussed, but the question of how to handle output\nparameters seems fairly straightforward: the value of the bind variable is\nthe name of the psql variable to be set a la \\gset.\n\nwhat about introduction new syntax for psql variables that should be passed as bind variables.I thought about basically reserving the \\$[0-9]+ space as bind variables, but it is possible, though unlikely, that users have been naming their variables like that.It's unclear from your example if that's what you meant, or if you wanted actual named variables ($name, $timestamp_before, $x).Actual named variables might cause problems with CREATE FUNCTION AS ... $body$ ... 
$body$; as well as the need to deduplicate them.So while it is less seamless, I do like the \\bind x y z \\g idea because it requires no changes in variable interpolation, and the list can be terminated with a slash command or ;To your point about forcing extended query protocol even when no parameters are, that would be SELECT 1 \\bind \\gIt hasn't been discussed, but the question of how to handle output parameters seems fairly straightforward: the value of the bind variable is the name of the psql variable to be set a la \\gset.",
"msg_date": "Mon, 7 Nov 2022 15:27:40 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> I thought about basically reserving the \\$[0-9]+ space as bind variables,\n> but it is possible, though unlikely, that users have been naming their\n> variables like that.\n\nDon't we already reserve that syntax as Params? Not sure whether there\nwould be any conflicts versus Params, but these are definitely not legal\nas SQL identifiers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Nov 2022 16:12:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > I thought about basically reserving the \\$[0-9]+ space as bind variables,\n> > but it is possible, though unlikely, that users have been naming their\n> > variables like that.\n>\n> Don't we already reserve that syntax as Params? Not sure whether there\n> would be any conflicts versus Params, but these are definitely not legal\n> as SQL identifiers.\n>\n> regards, tom lane\n>\n\nI think Pavel was hinting at something like:\n\n\\set $1 foo\n\\set $2 123\nUPDATE mytable SET value = $1 WHERE id = $2;\n\nWhich wouldn't step on anything, because I tested it, and \\set $1 foo\nalready returns 'Invalid variable name \"$1\"'.\n\nSo far, there seem to be two possible variations on how to go about this:\n\n1. Have special variables or a variable namespace that are known to be bind\nvariables. So long as one of them is defined, queries are sent using\nextended query protocol.\n2. Bind parameters one-time-use, applied strictly to the query currently in\nthe buffer in positional order, and once that query is run their\nassociation with being binds is gone.\n\nEach has its merits, I guess it comes down to how much we expect users to\nwant to re-use some or all the bind params of the previous query.\n\nOn Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> I thought about basically reserving the \\$[0-9]+ space as bind variables,\n> but it is possible, though unlikely, that users have been naming their\n> variables like that.\n\nDon't we already reserve that syntax as Params? 
Not sure whether there\nwould be any conflicts versus Params, but these are definitely not legal\nas SQL identifiers.\n\n regards, tom laneI think Pavel was hinting at something like:\\set $1 foo\\set $2 123UPDATE mytable SET value = $1 WHERE id = $2;Which wouldn't step on anything, because I tested it, and \\set $1 foo already returns 'Invalid variable name \"$1\"'.So far, there seem to be two possible variations on how to go about this:1. Have special variables or a variable namespace that are known to be bind variables. So long as one of them is defined, queries are sent using extended query protocol.2. Bind parameters one-time-use, applied strictly to the query currently in the buffer in positional order, and once that query is run their association with being binds is gone.Each has its merits, I guess it comes down to how much we expect users to want to re-use some or all the bind params of the previous query.",
"msg_date": "Mon, 7 Nov 2022 21:47:28 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "út 8. 11. 2022 v 3:47 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n> On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Corey Huinker <corey.huinker@gmail.com> writes:\n>> > I thought about basically reserving the \\$[0-9]+ space as bind\n>> variables,\n>> > but it is possible, though unlikely, that users have been naming their\n>> > variables like that.\n>>\n>> Don't we already reserve that syntax as Params? Not sure whether there\n>> would be any conflicts versus Params, but these are definitely not legal\n>> as SQL identifiers.\n>>\n>> regards, tom lane\n>>\n>\n> I think Pavel was hinting at something like:\n>\n> \\set $1 foo\n> \\set $2 123\n> UPDATE mytable SET value = $1 WHERE id = $2;\n>\n\nno, I just proposed special syntax for variable usage like bind variable\n\nlike\n\n\\set var Ahoj\n\nSELECT $var;\n\nI think so there should not be problem with custom strings, because we are\nable to push $x to stored procedures, so it should be safe to use it\nelsewhere\n\nWe can use the syntax @var - that is used by pgadmin\n\nRegards\n\nPavel\n\n\n\n\n> Which wouldn't step on anything, because I tested it, and \\set $1 foo\n> already returns 'Invalid variable name \"$1\"'.\n>\n> So far, there seem to be two possible variations on how to go about this:\n>\n> 1. Have special variables or a variable namespace that are known to be\n> bind variables. So long as one of them is defined, queries are sent using\n> extended query protocol.\n> 2. Bind parameters one-time-use, applied strictly to the query currently\n> in the buffer in positional order, and once that query is run their\n> association with being binds is gone.\n>\n> Each has its merits, I guess it comes down to how much we expect users to\n> want to re-use some or all the bind params of the previous query.\n>\n>\n\nút 8. 11. 
2022 v 3:47 odesílatel Corey Huinker <corey.huinker@gmail.com> napsal:On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> I thought about basically reserving the \\$[0-9]+ space as bind variables,\n> but it is possible, though unlikely, that users have been naming their\n> variables like that.\n\nDon't we already reserve that syntax as Params? Not sure whether there\nwould be any conflicts versus Params, but these are definitely not legal\nas SQL identifiers.\n\n regards, tom laneI think Pavel was hinting at something like:\\set $1 foo\\set $2 123UPDATE mytable SET value = $1 WHERE id = $2;no, I just proposed special syntax for variable usage like bind variablelike\\set var AhojSELECT $var;I think so there should not be problem with custom strings, because we are able to push $x to stored procedures, so it should be safe to use it elsewhereWe can use the syntax @var - that is used by pgadminRegardsPavelWhich wouldn't step on anything, because I tested it, and \\set $1 foo already returns 'Invalid variable name \"$1\"'.So far, there seem to be two possible variations on how to go about this:1. Have special variables or a variable namespace that are known to be bind variables. So long as one of them is defined, queries are sent using extended query protocol.2. Bind parameters one-time-use, applied strictly to the query currently in the buffer in positional order, and once that query is run their association with being binds is gone.Each has its merits, I guess it comes down to how much we expect users to want to re-use some or all the bind params of the previous query.",
"msg_date": "Tue, 8 Nov 2022 05:01:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 9:02 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> út 8. 11. 2022 v 3:47 odesílatel Corey Huinker <corey.huinker@gmail.com>\n> napsal:\n>\n>> On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> Corey Huinker <corey.huinker@gmail.com> writes:\n>>> > I thought about basically reserving the \\$[0-9]+ space as bind\n>>> variables,\n>>> > but it is possible, though unlikely, that users have been naming their\n>>> > variables like that.\n>>>\n>>> Don't we already reserve that syntax as Params? Not sure whether there\n>>> would be any conflicts versus Params, but these are definitely not legal\n>>> as SQL identifiers.\n>>>\n>>> regards, tom lane\n>>>\n>>\n>> I think Pavel was hinting at something like:\n>>\n>> \\set $1 foo\n>> \\set $2 123\n>> UPDATE mytable SET value = $1 WHERE id = $2;\n>>\n>\n> no, I just proposed special syntax for variable usage like bind variable\n>\n> like\n>\n> \\set var Ahoj\n>\n> SELECT $var;\n>\n\nWhy not extend psql conventions for variable specification?\n\nSELECT :$var$;\n\nThus:\n:var => Ahoj\n:'var' => 'Ahoj'\n:\"var\" => \"Ahoj\"\n:$var$ => $n (n => <Ahoj>)\n\nThe downside is it looks like dollar-quoting but isn't actually causing\n<$Ahoj$> to be produced. Instead psql would have to substitute $n at that\nlocation and internally remember that for this query $1 is the contents of\nvar.\n\nI would keep the \\gp meta-command to force extended mode regardless of\nwhether the query itself requires it.\n\nA pset variable to control the default seems reasonable as well. The\nimplication would be that if you set that pset variable there is no way to\nhave individual commands use simple query mode directly.\n\nDavid J.\n\nOn Mon, Nov 7, 2022 at 9:02 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:út 8. 11. 
2022 v 3:47 odesílatel Corey Huinker <corey.huinker@gmail.com> napsal:On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> I thought about basically reserving the \\$[0-9]+ space as bind variables,\n> but it is possible, though unlikely, that users have been naming their\n> variables like that.\n\nDon't we already reserve that syntax as Params? Not sure whether there\nwould be any conflicts versus Params, but these are definitely not legal\nas SQL identifiers.\n\n regards, tom laneI think Pavel was hinting at something like:\\set $1 foo\\set $2 123UPDATE mytable SET value = $1 WHERE id = $2;no, I just proposed special syntax for variable usage like bind variablelike\\set var AhojSELECT $var;Why not extend psql conventions for variable specification?SELECT :$var$;Thus::var => Ahoj:'var' => 'Ahoj':\"var\" => \"Ahoj\":$var$ => $n (n => <Ahoj>)The downside is it looks like dollar-quoting but isn't actually causing <$Ahoj$> to be produced. Instead psql would have to substitute $n at that location and internally remember that for this query $1 is the contents of var.I would keep the \\gp meta-command to force extended mode regardless of whether the query itself requires it.A pset variable to control the default seems reasonable as well. The implication would be that if you set that pset variable there is no way to have individual commands use simple query mode directly.David J.",
"msg_date": "Mon, 7 Nov 2022 21:21:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "út 8. 11. 2022 v 5:21 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Mon, Nov 7, 2022 at 9:02 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> út 8. 11. 2022 v 3:47 odesílatel Corey Huinker <corey.huinker@gmail.com>\n>> napsal:\n>>\n>>> On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>>> Corey Huinker <corey.huinker@gmail.com> writes:\n>>>> > I thought about basically reserving the \\$[0-9]+ space as bind\n>>>> variables,\n>>>> > but it is possible, though unlikely, that users have been naming their\n>>>> > variables like that.\n>>>>\n>>>> Don't we already reserve that syntax as Params? Not sure whether there\n>>>> would be any conflicts versus Params, but these are definitely not legal\n>>>> as SQL identifiers.\n>>>>\n>>>> regards, tom lane\n>>>>\n>>>\n>>> I think Pavel was hinting at something like:\n>>>\n>>> \\set $1 foo\n>>> \\set $2 123\n>>> UPDATE mytable SET value = $1 WHERE id = $2;\n>>>\n>>\n>> no, I just proposed special syntax for variable usage like bind variable\n>>\n>> like\n>>\n>> \\set var Ahoj\n>>\n>> SELECT $var;\n>>\n>\n> Why not extend psql conventions for variable specification?\n>\n> SELECT :$var$;\n>\n> Thus:\n> :var => Ahoj\n> :'var' => 'Ahoj'\n> :\"var\" => \"Ahoj\"\n> :$var$ => $n (n => <Ahoj>)\n>\n> The downside is it looks like dollar-quoting but isn't actually causing\n> <$Ahoj$> to be produced. Instead psql would have to substitute $n at that\n> location and internally remember that for this query $1 is the contents of\n> var.\n>\n> I would keep the \\gp meta-command to force extended mode regardless of\n> whether the query itself requires it.\n>\n> A pset variable to control the default seems reasonable as well. 
The\n> implication would be that if you set that pset variable there is no way to\n> have individual commands use simple query mode directly.\n>\n\n:$var$ looks little bit scary, and there can be risk of collision with\ncustom string separator\n\nbut :$var can be ok?\n\nThere is not necessity of showing symmetry\n\n\n\n\n\n\n>\n> David J.\n>\n\nút 8. 11. 2022 v 5:21 odesílatel David G. Johnston <david.g.johnston@gmail.com> napsal:On Mon, Nov 7, 2022 at 9:02 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:út 8. 11. 2022 v 3:47 odesílatel Corey Huinker <corey.huinker@gmail.com> napsal:On Mon, Nov 7, 2022 at 4:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Corey Huinker <corey.huinker@gmail.com> writes:\n> I thought about basically reserving the \\$[0-9]+ space as bind variables,\n> but it is possible, though unlikely, that users have been naming their\n> variables like that.\n\nDon't we already reserve that syntax as Params? Not sure whether there\nwould be any conflicts versus Params, but these are definitely not legal\nas SQL identifiers.\n\n regards, tom laneI think Pavel was hinting at something like:\\set $1 foo\\set $2 123UPDATE mytable SET value = $1 WHERE id = $2;no, I just proposed special syntax for variable usage like bind variablelike\\set var AhojSELECT $var;Why not extend psql conventions for variable specification?SELECT :$var$;Thus::var => Ahoj:'var' => 'Ahoj':\"var\" => \"Ahoj\":$var$ => $n (n => <Ahoj>)The downside is it looks like dollar-quoting but isn't actually causing <$Ahoj$> to be produced. Instead psql would have to substitute $n at that location and internally remember that for this query $1 is the contents of var.I would keep the \\gp meta-command to force extended mode regardless of whether the query itself requires it.A pset variable to control the default seems reasonable as well. 
The implication would be that if you set that pset variable there is no way to have individual commands use simple query mode directly.:$var$ looks little bit scary, and there can be risk of collision with custom string separatorbut :$var can be ok?There is not necessity of showing symmetry David J.",
"msg_date": "Tue, 8 Nov 2022 05:29:41 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "\tDavid G. Johnston wrote:\n\n> I would keep the \\gp meta-command to force extended mode regardless\n> of whether the query itself requires it.\n\n+1\n\n> A pset variable to control the default seems reasonable as well.\n> The implication would be that if you set that pset variable there is\n> no way to have individual commands use simple query mode directly.\n\n+1 except that it would be a \\set variable for consistency with the\nother execution-controlling variables. \\pset variables control only\nthe display.\n\nBTW if we wanted to auto-detect that a query requires binding or the\nextended query protocol, we need to keep in mind that for instance\n\"PREPARE stmt AS $1\" must pass without binding, with both the simple\nand the extended query protocol.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 08 Nov 2022 13:02:17 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 05.11.22 07:34, Corey Huinker wrote:\n> The most compact idea I can think of is to have \\bind and \\endbind (or \n> more terse equivalents \\bp and \\ebp)\n> \n> SELECT * FROM foo WHERE type_id = $1 AND cost > $2 \\bind 'param1' \n> 'param2' \\endbind $2 \\g filename.csv\n\nI like it. It makes my code even simpler, and it allows using all the \ndifferent \\g variants transparently. See attached patch.\n\n> Maybe the end-bind param isn't needed at all, we just insist that bind \n> params be single quoted strings or numbers, so the next slash command \n> ends the bind list.\n\nRight, the end-bind isn't needed.\n\nBtw., this also allows doing things like\n\nSELECT $1, $2\n\\bind '1' '2' \\g\n\\bind '3' '4' \\g\n\nThis isn't a prepared statement being reused, but it relies on the fact \nthat psql \\g with an empty query buffer resends the previous query. \nStill kind of neat.",
"msg_date": "Tue, 8 Nov 2022 13:37:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 08.11.22 13:02, Daniel Verite wrote:\n>> A pset variable to control the default seems reasonable as well.\n>> The implication would be that if you set that pset variable there is\n>> no way to have individual commands use simple query mode directly.\n> +1 except that it would be a \\set variable for consistency with the\n> other execution-controlling variables. \\pset variables control only\n> the display.\n\nIs there a use case for a global setting?\n\nIt seems to me that that would be just another thing that a \nsuper-careful psql script would have to reset to get a consistent \nstarting state.\n\n\n\n",
"msg_date": "Tue, 8 Nov 2022 13:39:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": ">\n>\n> Btw., this also allows doing things like\n>\n> SELECT $1, $2\n> \\bind '1' '2' \\g\n> \\bind '3' '4' \\g\n>\n\nThat's one of the things I was hoping for. Very cool.\n\n\n>\n> This isn't a prepared statement being reused, but it relies on the fact\n> that psql \\g with an empty query buffer resends the previous query.\n> Still kind of neat.\n\n\nYeah, if they wanted a prepared statement there's nothing stopping them.\n\nReview:\n\nPatch applies, tests pass.\n\nCode is quite straightforward.\n\nAs for the docs, they're very clear and probably sufficient as-is, but I\nwonder if we should we explicitly state that the bind-state and bind\nparameters do not \"stay around\" after the query is executed? Suggestions in\nbold:\n\n This command causes the extended query protocol (see <xref\n linkend=\"protocol-query-concepts\"/>) to be used, unlike normal\n <application>psql</application> operation, which uses the simple\n query protocol. *Extended query protocol will be used* *even if\nno parameters are specified, s*o this command can be useful to test the\nextended\n query protocol from psql. *This command affects only the next\nquery executed, all subsequent queries will use the regular query protocol\nby default.*\n\nTests seem comprehensive. I went looking for the TAP test that this would\nhave replaced, but found none, and it seems the only test where we exercise\nPQsendQueryParams is libpq_pipeline.c, so these tests are a welcome\naddition.\n\nAside from the possible doc change, it looks ready to go.\n\nBtw., this also allows doing things like\n\nSELECT $1, $2\n\\bind '1' '2' \\g\n\\bind '3' '4' \\gThat's one of the things I was hoping for. Very cool. \n\nThis isn't a prepared statement being reused, but it relies on the fact \nthat psql \\g with an empty query buffer resends the previous query. 
\nStill kind of neat.Yeah, if they wanted a prepared statement there's nothing stopping them.Review:Patch applies, tests pass.Code is quite straightforward.As for the docs, they're very clear and probably sufficient as-is, but I wonder if we should we explicitly state that the bind-state and bind parameters do not \"stay around\" after the query is executed? Suggestions in bold: This command causes the extended query protocol (see <xref linkend=\"protocol-query-concepts\"/>) to be used, unlike normal <application>psql</application> operation, which uses the simple query protocol. Extended query protocol will be used even if no parameters are specified, so this command can be useful to test the extended query protocol from psql. This command affects only the next query executed, all subsequent queries will use the regular query protocol by default.Tests seem comprehensive. I went looking for the TAP test that this would have replaced, but found none, and it seems the only test where we exercise PQsendQueryParams is libpq_pipeline.c, so these tests are a welcome addition.Aside from the possible doc change, it looks ready to go.",
"msg_date": "Tue, 8 Nov 2022 18:12:14 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> Is there a use case for a global setting?\n\nI assume that we may sometimes want to use the\nextended protocol on all queries of a script, like\npgbench does with --protocol=extended.\nOutside of psql, it's too complicated to parse a SQL script to\nreplace the end-of-query semicolons with \\gp, whereas\na psql setting solves this effortlessly.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 09 Nov 2022 20:10:34 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 09.11.22 20:10, Daniel Verite wrote:\n> \tPeter Eisentraut wrote:\n> \n>> Is there a use case for a global setting?\n> \n> I assume that we may sometimes want to use the\n> extended protocol on all queries of a script, like\n> pgbench does with --protocol=extended.\n\nBut is there an actual use case for this in psql? In pgbench, there are \nscenarios where you want to test aspects of prepared statements, plan \ncaching, and so on. Is there something like that for psql?\n\n\n\n",
"msg_date": "Fri, 11 Nov 2022 16:09:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> > I assume that we may sometimes want to use the\n> > extended protocol on all queries of a script, like\n> > pgbench does with --protocol=extended.\n> \n> But is there an actual use case for this in psql? In pgbench, there are \n> scenarios where you want to test aspects of prepared statements, plan \n> caching, and so on. Is there something like that for psql?\n\nIf we set aside \"exercising the protocol\" as not an interesting use case\nfor psql, then no, I can't think of any benefit.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:47:35 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 09.11.22 00:12, Corey Huinker wrote:\n> As for the docs, they're very clear and probably sufficient as-is, but I \n> wonder if we should we explicitly state that the bind-state and bind \n> parameters do not \"stay around\" after the query is executed? Suggestions \n> in bold:\n> \n> This command causes the extended query protocol (see <xref\n> linkend=\"protocol-query-concepts\"/>) to be used, unlike normal\n> <application>psql</application> operation, which uses the simple\n> query protocol. *Extended query protocol will be used* *even \n> if no parameters are specified, s*o this command can be useful to test \n> the extended\n> query protocol from psql. *This command affects only the next \n> query executed, all subsequent queries will use the regular query \n> protocol by default.*\n> \n> Tests seem comprehensive. I went looking for the TAP test that this \n> would have replaced, but found none, and it seems the only test where we \n> exercise PQsendQueryParams is libpq_pipeline.c, so these tests are a \n> welcome addition.\n> \n> Aside from the possible doc change, it looks ready to go.\n\nCommitted with those doc changes. Thanks.\n\n\n\n",
"msg_date": "Tue, 15 Nov 2022 14:29:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 8:29 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 09.11.22 00:12, Corey Huinker wrote:\n> > As for the docs, they're very clear and probably sufficient as-is, but I\n> > wonder if we should we explicitly state that the bind-state and bind\n> > parameters do not \"stay around\" after the query is executed? Suggestions\n> > in bold:\n> >\n> >     This command causes the extended query protocol (see <xref\n> >     linkend=\"protocol-query-concepts\"/>) to be used, unlike normal\n> >     <application>psql</application> operation, which uses the\n> simple\n> >     query protocol. *Extended query protocol will be used* *even\n> > if no parameters are specified, s*o this command can be useful to test\n> > the extended\n> >     query protocol from psql. *This command affects only the next\n> > query executed, all subsequent queries will use the regular query\n> > protocol by default.*\n> >\n> > Tests seem comprehensive. I went looking for the TAP test that this\n> > would have replaced, but found none, and it seems the only test where we\n> > exercise PQsendQueryParams is libpq_pipeline.c, so these tests are a\n> > welcome addition.\n> >\n> > Aside from the possible doc change, it looks ready to go.\n>\n> Committed with those doc changes. Thanks.\n>\n>\nI got thinking about this, and while things may be fine as-is, I would like\nto hear some opinions as to whether this behavior is correct:\n\nString literals can include spaces\n\n[16:51:35 EST] corey=# select $1, $2 \\bind 'abc def' gee \\g\n ?column? | ?column?\n----------+----------\n abc def | gee\n(1 row)\n\n\nString literal includes spaces, but also includes quotes:\n\nTime: 0.363 ms\n[16:51:44 EST] corey=# select $1, $2 \\bind \"abc def\" gee \\g\n ?column? | ?column?\n-----------+----------\n \"abc def\" | gee\n(1 row)\n\nSemi-colon does not terminate an EQP statement, ';' is seen as a parameter:\n\n[16:51:47 EST] corey=# select $1, $2 \\bind \"abc def\" gee ;\ncorey-# \\g\nERROR: bind message supplies 3 parameters, but prepared statement \"\"\nrequires 2\n\n\nConfirming that slash-commands must be unquoted\n\n[16:52:23 EST] corey=# select $1, $2 \\bind \"abc def\" '\\\\g' \\g\n ?column? | ?column?\n-----------+----------\n \"abc def\" | \\g\n(1 row)\n\n[16:59:00 EST] corey=# select $1, $2 \\bind \"abc def\" '\\watch' \\g\n ?column? | ?column?\n-----------+----------\n \"abc def\" | watch\n(1 row)\n\nConfirming that any slash command terminates the bind list, but ';' does not\n\n[16:59:54 EST] corey=# select $1, $2 \\bind \"abc def\" gee \\watch 5\nMon 21 Nov 2022 05:00:07 PM EST (every 5s)\n\n ?column? | ?column?\n-----------+----------\n \"abc def\" | gee\n(1 row)\n\nTime: 0.422 ms\nMon 21 Nov 2022 05:00:12 PM EST (every 5s)\n\n ?column? | ?column?\n-----------+----------\n \"abc def\" | gee\n(1 row)\n\nIs this all working as expected?",
"msg_date": "Mon, 21 Nov 2022 17:02:08 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 21.11.22 23:02, Corey Huinker wrote:\n> I got thinking about this, and while things may be fine as-is, I would \n> like to hear some opinions as to whether this behavior is correct:\n\nThis is all psql syntax, nothing specific to this command. The only \nleeway is choosing the appropriate enum slash_option_type, but the \nchoices other than OT_NORMAL don't seem to be particularly applicable to \nthis.\n\n\n\n",
"msg_date": "Tue, 22 Nov 2022 16:51:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "In one of my environments, this feature didn't work as expected. Digging into it, I found that it is incompatible with FETCH_COUNT being set. Sorry for not recognising this during the betas.\n\nAttached a simple patch with tests running the cursor declaration through PQexecParams instead of PQexec.\n\nAlternatively, we could avoid going to ExecQueryUsingCursor and force execution via ExecQueryAndProcessResults in SendQuery (around line 1134 in src/bin/psql/common.c) when \\bind is used:\n\n\telse if (pset.fetch_count <= 0 || pset.gexec_flag ||\n-\t\t\t pset.crosstab_flag || !is_select_command(query))\n+\t\t\t pset.crosstab_flag || !is_select_command(query) ||\n+\t\t\t pset.bind_flag)\n\nbest regards\nTobias",
"msg_date": "Thu, 14 Sep 2023 22:26:55 +0200",
"msg_from": "Tobias Bussmann <t.bussmann@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "On 2023-Sep-14, Tobias Bussmann wrote:\n\n> In one of my environments, this feature didn't work as expected.\n> Digging into it, I found that it is incompatible with FETCH_COUNT\n> being set. Sorry for not recognising this during the betas.\n> \n> Attached a simple patch with tests running the cursor declaration\n> through PQexecParams instead of PGexec.\n\nHmm, strange. I had been trying to make \\bind work with extended\nprotocol, and my findings were that there's interactions with the code\nthat was added for pipeline mode(*). I put research aside to work on\nother things, but intended to get back to it soon ... I'm really\nsurprised that it works for you here.\n\nMaybe your tests are just not extensive enough to show that it fails.\n\n(*) This is not actually proven, but Peter had told me that his \\bind\nstuff had previously worked when he first implemented it before pipeline\nlanded. Because that's the only significant change that has happened to\nthe libpq code lately, it's a reasonable hypothesis.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:51:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
},
{
"msg_contents": "Hi,\n\n> > In one of my environments, this feature didn't work as expected.\n> > Digging into it, I found that it is incompatible with FETCH_COUNT\n> > being set. Sorry for not recognising this during the betas.\n> >\n> > Attached a simple patch with tests running the cursor declaration\n> > through PQexecParams instead of PGexec.\n>\n> Hmm, strange. I had been trying to make \\bind work with extended\n> protocol, and my findings were that there's interactions with the code\n> that was added for pipeline mode(*). I put research aside to work on\n> other things, but intended to get back to it soon ... I'm really\n> surprised that it works for you here.\n>\n> Maybe your tests are just not extensive enough to show that it fails.\n>\n> (*) This is not actually proven, but Peter had told me that his \\bind\n> stuff had previously worked when he first implemented it before pipeline\n> landed. Because that's the only significant change that has happened to\n> the libpq code lately, it's a reasonable hypothesis.\n\nA colleague of mine is very excited about the new \\bind functionality\nin psql. However he is puzzled by the fact that there is no obvious\nway to bind a NULL value, except for something like:\n\n```\ncreate table t (v text);\ninsert into t values (case when $1 = '' then NULL else $1 end) \\bind '' \\g\nselect v, v is null from t;\n```\n\nMaybe we should also support something like ... \\bind val1 \\null val3 \\g ?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 16 Feb 2024 18:15:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: psql: Add command to use extended query protocol"
}
] |
[
{
"msg_contents": "Hi,\n\nRight now it is possible to add a partitioned table with foreign tables\nas its children as a target of a subscription. It can lead to an assert\n(or a segfault, if compiled without asserts) on a logical replication\nworker when the worker attempts to insert the data received via\nreplication into the foreign table. Reproduce with caution, the worker\nis going to crash and restart indefinitely. The setup:\n\nPublisher on 5432 port:\n\nCREATE TABLE parent (id int, num int);\nCREATE PUBLICATION parent_pub FOR TABLE parent;\n\nSubscriber on 5433 port:\n\nCREATE EXTENSION postgres_fdw;\nCREATE SERVER loopback foreign data wrapper postgres_fdw options (host\n'127.0.0.1', port '5433', dbname 'postgres');\nCREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\nCREATE TABLE parent (id int, num int) partition by range (id);\nCREATE FOREIGN TABLE p1 PARTITION OF parent DEFAULT SERVER loopback;\nCREATE TABLE p1_loc(id int, num int);\nCREATE SUBSCRIPTION parent_sub CONNECTION 'host=127.0.0.1 port=5432\ndbname=postgres' PUBLICATION parent_pub;\n\nThen run an insert on the publisher: INSERT INTO parent VALUES (1, 1);\n\nThis will cause a segfault or raise an assert, because inserting into\nforeign tables via logical replication is not possible. The solution I\npropose is to add recursive checks of relkind for children of a target,\nif the target is a partitioned table. I have attached a patch for this\nand managed to reproduce this on REL_14_STABLE as well, not sure if a\npatch for that version is also needed.\n\nKind Regards,\nIlya Gladyshev",
"msg_date": "Fri, 28 Oct 2022 23:31:09 +0400",
"msg_from": "ilya.v.gladyshev@gmail.com",
"msg_from_op": true,
"msg_subject": "Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "On Sat, Oct 29, 2022 at 1:01 AM <ilya.v.gladyshev@gmail.com> wrote:\n>\n> Right now it is possible to add a partitioned table with foreign tables\n> as its children as a target of a subscription. It can lead to an assert\n> (or a segfault, if compiled without asserts) on a logical replication\n> worker when the worker attempts to insert the data received via\n> replication into the foreign table. Reproduce with caution, the worker\n> is going to crash and restart indefinitely. The setup:\n\nYes, this looks like a bug and your fix seems correct to me. It would\nbe nice to add a test case for this scenario.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 30 Oct 2022 15:16:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> Yes, this looks like a bug and your fix seems correct to me. It would\n> be nice to add a test case for this scenario.\n\nA test case doesn't seem that exciting to me. If we were trying to\nmake it actually work, then yeah, but throwing an error isn't that\nuseful to test. The code will be exercised by replication to a\nregular partitioned table (I assume we do have tests for that).\n\nWhat I'm wondering about is whether we can refactor this code\nto avoid so many usually-useless catalog lookups. Pulling the\nnamespace name, in particular, is expensive and we generally\nare not going to need the result. In the child-rel case it'd\nbe much better to pass the opened relation and let the error-check\nsubroutine work from that. Maybe we should just do it like that\nat the start of the recursion, too? Or pass the relid and let\nthe subroutine look up the names only in the error case.\n\nA completely different line of thought is that this doesn't seem\nlike a terribly bulletproof fix, since children could get added to\na partitioned table after we look. Maybe it'd be better to check\nthe relkind at the last moment before we do something that depends\non a child table being a relation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 30 Oct 2022 09:39:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "On 2022-Oct-28, ilya.v.gladyshev@gmail.com wrote:\n\n> This will cause a segfault or raise an assert, because inserting into\n> foreign tables via logical replication is not possible. The solution I\n> propose is to add recursive checks of relkind for children of a target,\n> if the target is a partitioned table.\n\nHowever, I imagine that the only reason we don't support this is that\nthe code hasn't been written yet. I think it would be better to write\nthat code, so that we don't have to raise any error at all (unless the\nforeign table is something that doesn't support DML, in which case we\nwould have to raise an error). Of course, we still have to fix it in\nbackbranches, but we can just do it as a targeted check at the moment of\ninsert/update, not at the moment of subscription create/alter.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n",
"msg_date": "Sun, 30 Oct 2022 16:52:39 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with\n foreign children"
},
{
"msg_contents": "On Sun, Oct 30, 2022 9:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> What I'm wondering about is whether we can refactor this code\n> to avoid so many usually-useless catalog lookups. Pulling the\n> namespace name, in particular, is expensive and we generally\n> are not going to need the result. In the child-rel case it'd\n> be much better to pass the opened relation and let the error-check\n> subroutine work from that. Maybe we should just do it like that\n> at the start of the recursion, too? Or pass the relid and let\n> the subroutine look up the names only in the error case.\n> \n> A completely different line of thought is that this doesn't seem\n> like a terribly bulletproof fix, since children could get added to\n> a partitioned table after we look. Maybe it'd be better to check\n> the relkind at the last moment before we do something that depends\n> on a child table being a relation.\n> \n\nI agree. So maybe we can add this check in apply_handle_tuple_routing().\n\ndiff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c\nindex 5250ae7f54..e941b68e4b 100644\n--- a/src/backend/replication/logical/worker.c\n+++ b/src/backend/replication/logical/worker.c\n@@ -2176,6 +2176,10 @@ apply_handle_tuple_routing(ApplyExecutionData *edata,\n Assert(partrelinfo != NULL);\n partrel = partrelinfo->ri_RelationDesc;\n\n+ /* Check for supported relkind. */\n+ CheckSubscriptionRelkind(partrel->rd_rel->relkind,\n+ relmapentry->remoterel.nspname, relmapentry->remoterel.relname);\n+\n /*\n * To perform any of the operations below, the tuple must match the\n * partition's rowtype. Convert if needed or just copy, using a dedicated\n\n\nRegards,\nShi yu\n\n\n",
"msg_date": "Mon, 31 Oct 2022 03:20:25 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "On Sun, Oct 30, 2022 at 7:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > Yes, this looks like a bug and your fix seems correct to me. It would\n> > be nice to add a test case for this scenario.\n>\n> A test case doesn't seem that exciting to me. If we were trying to\n> make it actually work, then yeah, but throwing an error isn't that\n> useful to test. The code will be exercised by replication to a\n> regular partitioned table (I assume we do have tests for that).\n\nThat's true, but we missed this case because of the absence of the\ntest case so I thought at least we can add it now to catch any future\nbug in case of any behavior change.\n\n> A completely different line of thought is that this doesn't seem\n> like a terribly bulletproof fix, since children could get added to\n> a partitioned table after we look. Maybe it'd be better to check\n> the relkind at the last moment before we do something that depends\n> on a child table being a relation.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Oct 2022 14:18:29 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "\nOn Sun, 2022-10-30 at 09:39 -0400, Tom Lane wrote:\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > Yes, this looks like a bug and your fix seems correct to me. It\n> > would\n> > be nice to add a test case for this scenario.\n> \n> A test case doesn't seem that exciting to me. If we were trying to\n> make it actually work, then yeah, but throwing an error isn't that\n> useful to test. The code will be exercised by replication to a\n> regular partitioned table (I assume we do have tests for that).\n> \n> What I'm wondering about is whether we can refactor this code\n> to avoid so many usually-useless catalog lookups. Pulling the\n> namespace name, in particular, is expensive and we generally\n> are not going to need the result. In the child-rel case it'd\n> be much better to pass the opened relation and let the error-check\n> subroutine work from that. Maybe we should just do it like that\n> at the start of the recursion, too? Or pass the relid and let\n> the subroutine look up the names only in the error case.\n\nSure, I think passing in the opened relation is a good idea.\n\n> A completely different line of thought is that this doesn't seem\n> like a terribly bulletproof fix, since children could get added to\n> a partitioned table after we look. Maybe it'd be better to check\n> the relkind at the last moment before we do something that depends\n> on a child table being a relation.\n\nThese checks are run both on subscription DDL commands, which is good\nto get some early feedback, and inside logical_rel_open(), right before\nsomething useful is about to get done to the relation, so we should be\ngood here. I think some tests would actually be nice to verify this,\nbut I don't really have a strong opinion about it.\n\nI'll refactor the patch and post a bit later.\n\n\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:15:48 +0400",
"msg_from": "ilya.v.gladyshev@gmail.com",
"msg_from_op": true,
"msg_subject": "Re: Segfault on logical replication to partitioned table with\n foreign children"
},
{
"msg_contents": "On Sun, 2022-10-30 at 16:52 +0100, Alvaro Herrera wrote:\n> On 2022-Oct-28, ilya.v.gladyshev@gmail.com wrote:\n> \n> > This will cause a segfault or raise an assert, because inserting\n> > into\n> > foreign tables via logical replication is not possible. The\n> > solution I\n> > propose is to add recursive checks of relkind for children of a\n> > target,\n> > if the target is a partitioned table.\n> \n> However, I imagine that the only reason we don't support this is that\n> the code hasn't been written yet. I think it would be better to write\n> that code, so that we don't have to raise any error at all (unless\n> the\n> foreign table is something that doesn't support DML, in which case we\n> would have to raise an error). Of course, we still have to fix it in\n> backbranches, but we can just do it as a targeted check at the moment\n> of\n> insert/update, not at the moment of subscription create/alter.\n> \n\nSure, this patch is just a quick fix. A proper implementation of\nlogical replication into foreign tables would be a much more difficult\nundertaking. I think this patch is simple enough, the checks in the\npatch are performed both on subscription DDL and when the relation is\nopened for logical replication, so it gives both early feedback and\nlast-minute checks as well. All the code infrastructure for these kinds\nof checks is already in place, so I think it's a good idea to use it.\n\nP.S. sorry, duplicating the message, forgot to cc the mailing list the\nfirst time\n\n\n\n",
"msg_date": "Tue, 01 Nov 2022 01:52:19 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with\n foreign children"
},
{
"msg_contents": "On Mon, 2022-10-31 at 03:20 +0000, shiy.fnst@fujitsu.com wrote:\n> On Sun, Oct 30, 2022 9:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > What I'm wondering about is whether we can refactor this code\n> > to avoid so many usually-useless catalog lookups. Pulling the\n> > namespace name, in particular, is expensive and we generally\n> > are not going to need the result. In the child-rel case it'd\n> > be much better to pass the opened relation and let the error-check\n> > subroutine work from that. Maybe we should just do it like that\n> > at the start of the recursion, too? Or pass the relid and let\n> > the subroutine look up the names only in the error case.\n> > \n> > A completely different line of thought is that this doesn't seem\n> > like a terribly bulletproof fix, since children could get added to\n> > a partitioned table after we look. Maybe it'd be better to check\n> > the relkind at the last moment before we do something that depends\n> > on a child table being a relation.\n> > \n> \n> I agree. So maybe we can add this check in\n> apply_handle_tuple_routing().\n> \n> diff --git a/src/backend/replication/logical/worker.c\n> b/src/backend/replication/logical/worker.c\n> index 5250ae7f54..e941b68e4b 100644\n> --- a/src/backend/replication/logical/worker.c\n> +++ b/src/backend/replication/logical/worker.c\n> @@ -2176,6 +2176,10 @@ apply_handle_tuple_routing(ApplyExecutionData\n> *edata,\n> Assert(partrelinfo != NULL);\n> partrel = partrelinfo->ri_RelationDesc;\n> \n> + /* Check for supported relkind. */\n> + CheckSubscriptionRelkind(partrel->rd_rel->relkind,\n> + relmapentry-\n> >remoterel.nspname, relmapentry->remoterel.relname);\n> +\n> /*\n> * To perform any of the operations below, the tuple must\n> match the\n> * partition's rowtype. Convert if needed or just copy, using\n> a dedicated\n> \n> \n> Regards,\n> Shi yu\n\nI have verified that the current patch handles the attaching of new\npartitions to the target partitioned table by throwing an error on\nattempt to insert into a foreign table inside the logical replication\nworker. I have refactored the code to minimize cache lookups, but I am\nyet to write the tests for this. See the attached patch for the\nrefactored version.",
"msg_date": "Tue, 01 Nov 2022 02:02:42 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with\n foreign children"
},
{
"msg_contents": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com> writes:\n> [ v2-0001-check-relkind-of-subscription-target-recursively.patch ]\n\nHmm. I like Shi yu's way better (formal patch attached). Checking\nat CREATE/ALTER SUBSCRIPTION is much more complicated, and it's really\ninsufficient, because what if someone adds a new partition after\nsetting up the subscription?\n\nI get the argument about it being a useful check for simple mistakes,\nbut I don't entirely buy that argument, because I think there are\npotential use-cases that it'd disallow needlessly. Imagine a\npartitioned table that receives replication updates, but only into\nthe \"current\" partition; older partitions are basically static.\nNow suppose you'd like to offload some of that old seldom-used data\nto another server, and make those partitions into foreign tables\nso you can still access it at need. Such a setup will work perfectly\nfine today, but this patch would break it.\n\nSo I think what we want is to check when we identify the partition.\nMaybe Shi yu missed a place or two to check, but I verified that the\nattached stops the crash.\n\nThere'd still be value in refactoring to avoid premature lookup\nof the namespace name, but that's just micro-optimization.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 01 Nov 2022 15:10:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "Since we're getting pretty close to the next set of back-branch releases,\nI went ahead and pushed a minimal fix along the lines suggested by Shi Yu.\n(I realized that there's a second ExecFindPartition call in worker.c that\nalso needs a check.) We still can at leisure think about refactoring\nCheckSubscriptionRelkind to avoid unnecessary lookups. I think that\nis something we should do only in HEAD; it'll just be a marginal savings,\nnot worth the risks of API changes in stable branches. The other loose\nend is whether to worry about a regression test case. I'm inclined not\nto bother. The only thing that isn't getting exercised is the actual\nereport, which probably isn't in great need of routine testing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Nov 2022 12:37:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with foreign\n children"
},
{
"msg_contents": "On Wed, 2022-11-02 at 12:37 -0400, Tom Lane wrote:\n> Since we're getting pretty close to the next set of back-branch\n> releases,\n> I went ahead and pushed a minimal fix along the lines suggested by\n> Shi Yu.\n> (I realized that there's a second ExecFindPartition call in worker.c\n> that\n> also needs a check.) We still can at leisure think about refactoring\n> CheckSubscriptionRelkind to avoid unnecessary lookups. I think that\n> is something we should do only in HEAD; it'll just be a marginal\n> savings,\n> not worth the risks of API changes in stable branches. The other\n> loose\n> end is whether to worry about a regression test case. I'm inclined\n> not\n> to bother. The only thing that isn't getting exercised is the actual\n> ereport, which probably isn't in great need of routine testing.\n> \n> regards, tom lane\n\nI agree that early checks limit some of the functionality that was\navailable before, so I guess the only way to preserve it is to do only\nthe last-minute checks after routing, fair enough. As for the\nrefactoring of the premature lookup, I have done some work on that in\nthe previous patch, I think we can just use it. Attached a separate\npatch with the refactoring.",
"msg_date": "Thu, 03 Nov 2022 12:45:08 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on logical replication to partitioned table with\n foreign children"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm working to extract independently useful bits from my AIO work, to reduce\nthe size of that patchset. This is one of those pieces.\n\nIn workloads that extend relations a lot, we end up being extremely contended\non the relation extension lock. We've attempted to address that to some degree\nby using batching, which helps, but only so much.\n\nThe fundamental issue, in my opinion, is that we do *way* too much while\nholding the relation extension lock. We acquire a victim buffer, if that\nbuffer is dirty, we potentially flush the WAL, then write out that\nbuffer. Then we zero out the buffer contents. Call smgrextend().\n\nMost of that work does not actually need to happen while holding the relation\nextension lock. As far as I can tell, the minimum that needs to be covered by\nthe extension lock is the following:\n\n1) call smgrnblocks()\n2) insert buffer[s] into the buffer mapping table at the location returned by\n smgrnblocks\n3) mark buffer[s] as IO_IN_PROGRESS\n\n\n1) obviously has to happen with the relation extension lock held because\notherwise we might miss another relation extension. 2+3) need to happen with\nthe lock held, because otherwise another backend not doing an extension could\nread the block before we're done extending, dirty it, write it out, and then\nhave it overwritten by the extending backend.\n\n\nThe reason we currently do so much work while holding the relation extension\nlock is that bufmgr.c does not know about the relation lock and that relation\nextension happens entirely within ReadBuffer* - there's no way to use a\nnarrower scope for the lock.\n\n\nMy fix for that is to add a dedicated function for extending relations, that\ncan acquire the extension lock if necessary (callers can tell it to skip that,\ne.g., when initially creating an init fork). 
This routine is called by\nReadBuffer_common() when P_NEW is passed in, to provide backward\ncompatibility.\n\n\nTo be able to acquire victim buffers outside of the extension lock, victim\nbuffers are now acquired separately from inserting the new buffer mapping\nentry. Victim buffer are pinned, cleaned, removed from the buffer mapping\ntable and marked invalid. Because they are pinned, clock sweeps in other\nbackends won't return them. This is done in a new function,\n[Local]BufferAlloc().\n\nThis is similar to Yuri's patch at [0], but not that similar to earlier or\nlater approaches in that thread. I don't really understand why that thread\nwent on to ever more complicated approaches, when the basic approach shows\nplenty gains, with no issues around the number of buffer mapping entries that\ncan exist etc.\n\n\n\nOther interesting bits I found:\n\na) For workloads that [mostly] fit into s_b, the smgwrite() that BufferAlloc()\n does, nearly doubles the amount of writes. First the kernel ends up writing\n out all the zeroed out buffers after a while, then when we write out the\n actual buffer contents.\n\n The best fix for that seems to be to optionally use posix_fallocate() to\n reserve space, without dirtying pages in the kernel page cache. However, it\n looks like that's only beneficial when extending by multiple pages at once,\n because it ends up causing one filesystem-journal entry for each extension\n on at least some filesystems.\n\n I added 'smgrzeroextend()' that can extend by multiple blocks, without the\n caller providing a buffer to write out. When extending by 8 or more blocks,\n posix_fallocate() is used if available, otherwise pg_pwritev_with_retry() is\n used to extend the file.\n\n\nb) I found that is quite beneficial to bulk-extend the relation with\n smgrextend() even without concurrency. 
The reason for that is the primarily\n the aforementioned dirty buffers that our current extension method causes.\n\n One bit that stumped me for quite a while is to know how much to extend the\n relation by. RelationGetBufferForTuple() drives the decision whether / how\n much to bulk extend purely on the contention on the extension lock, which\n obviously does not work for non-concurrent workloads.\n\n After quite a while I figured out that we actually have good information on\n how much to extend by, at least for COPY /\n heap_multi_insert(). heap_multi_insert() can compute how much space is\n needed to store all tuples, and pass that on to\n RelationGetBufferForTuple().\n\n For that to be accurate we need to recompute that number whenever we use an\n already partially filled page. That's not great, but doesn't appear to be a\n measurable overhead.\n\n\nc) Contention on the FSM and the pages returned by it is a serious bottleneck\n after a) and b).\n\n The biggest issue is that the current bulk insertion logic in hio.c enters\n all but one of the new pages into the freespacemap. That will immediately\n cause all the other backends to contend on the first few pages returned the\n FSM, and cause contention on the FSM pages itself.\n\n I've, partially, addressed that by using the information about the required\n number of pages from b). Whether we bulk insert or not, the number of pages\n we know we're going to need for one heap_multi_insert() don't need to be\n added to the FSM - we're going to use them anyway.\n\n I've stashed the number of free blocks in the BulkInsertState for now, but\n I'm not convinced that that's the right place.\n\n If I revert just this part, the \"concurrent COPY into unlogged table\"\n benchmark goes from ~240 tps to ~190 tps.\n\n\n Even after that change the FSM is a major bottleneck. Below I included\n benchmarks showing this by just removing the use of the FSM, but I haven't\n done anything further about it. 
The contention seems to be both from\n updating the FSM and from thundering-herd-like symptoms from accessing\n the FSM.\n\n The update part could likely be addressed to some degree with a batch\n update operation updating the state for multiple pages.\n\n The access part could perhaps be addressed by adding an operation that gets\n a page and immediately marks it as fully used, so other backends won't also\n try to access it.\n\n\n\nd) doing\n\t\t/* new buffers are zero-filled */\n\t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n\n under the extension lock is surprisingly expensive on my two-socket\n workstation (but much less noticeable on my laptop).\n\n If I move the MemSet back under the extension lock, the \"concurrent COPY\n into unlogged table\" benchmark goes from ~240 tps to ~200 tps.\n\n\ne) When running a few benchmarks for this email, I noticed that there was a\n sharp performance dropoff for the patched code for a pgbench -S -s100 on a\n database with 1GB s_b, starting between 512 and 1024 clients. This started with\n the patch only acquiring one buffer partition lock at a time. Lots of\n debugging ensued, resulting in [3].\n\n The problem isn't actually related to the change, it just makes it more\n visible, because the \"lock chains\" between two partitions reduce the\n average length of the wait queues substantially, by distributing them\n between more partitions. [3] has a reproducer that's entirely independent\n of this patchset.\n\n\n\n\nBulk extension acquires a number of victim buffers, acquires the extension\nlock, inserts the buffers into the buffer mapping table and marks them as\nio-in-progress, calls smgrextend and releases the extension lock. After that\nbuffer[s] are locked (depending on mode and an argument indicating the number\nof blocks to be locked), and TerminateBufferIO() is called.\n\nThis requires two new pieces of infrastructure:\n\nFirst, pinning multiple buffers opens up the obvious danger that we might run\nout of non-pinned buffers. 
I added LimitAdditional[Local]Pins() that allows each\nbackend to pin a proportional share of buffers (although always allowing one,\nas we do today).\n\nSecond, having multiple IOs in progress at the same time isn't possible with\nthe InProgressBuf mechanism. I added a ResourceOwnerRememberBufferIO() etc to\ndeal with that instead. I like that this ends up removing a lot of\nAbortBufferIO() calls from the loops of various aux processes (now released\ninside ReleaseAuxProcessResources()).\n\nIn very extreme workloads (single backend doing a pgbench -S -s 100 against a\ns_b=64MB database) the memory allocations triggered by StartBufferIO() are\n*just about* visible, not sure if that's worth worrying about - we do such\nallocations for the much more common pinning of buffers as well.\n\n\nThe new [Bulk]ExtendSharedRelationBuffered() currently have both a Relation\nand an SMgrRelation argument, requiring at least one of them to be set. The\nreason for that is on the one hand that LockRelationForExtension() requires a\nrelation and on the other hand, redo routines typically don't have a Relation\naround (recovery doesn't require an extension lock). That's not pretty, but\nseems a tad better than the ReadBufferExtended() vs\nReadBufferWithoutRelcache() mess.\n\n\n\nI've done a fair bit of benchmarking of this patchset. For COPY it comes out\nahead everywhere. It's possible that there's a very small regression for\nextremely IO-miss-heavy workloads, more below.\n\n\nserver \"base\" configuration:\n\nmax_wal_size=150GB\nshared_buffers=24GB\nhuge_pages=on\nautovacuum=0\nbackend_flush_after=2MB\nmax_connections=5000\nwal_buffers=128MB\nwal_segment_size=1GB\n\nbenchmark: pgbench running COPY into a single table. 
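Before the numbers, the proportional pin limit mentioned above amounts to only a few lines. This standalone sketch uses made-up names and omits the shared bookkeeping the real LimitAdditional[Local]Pins() would need; only the "proportional share, but always at least one" rule follows the description:

```c
#include <assert.h>

/*
 * Cap how many additional buffers one backend may pin: its proportional
 * share of the buffer pool, minus what it already holds, but never less
 * than one (matching today's single-victim-buffer behaviour).
 */
static int
limit_additional_pins(int nbuffers, int max_backends,
					  int pins_already_held, int pins_wanted)
{
	int			headroom = nbuffers / max_backends - pins_already_held;

	if (headroom < 1)
		headroom = 1;			/* always allow at least one pin */
	return (pins_wanted < headroom) ? pins_wanted : headroom;
}
```

A caller wanting N victim buffers for a bulk extension would clamp N through this before pinning.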
pgbench -t is set\naccording to the client count, so that the same amount of data is inserted.\nThis is done both using small files ([1], ringbuffer not effective, no dirty\ndata to write out within the benchmark window) and a bit larger files ([2],\nlots of data to write out due to ringbuffer).\n\nTo make it a fair comparison HEAD includes the lwlock-waitqueue fix as well.\n\ns_b=24GB\n\ntest: unlogged_small_files, format: text, files: 1024, 9015MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 58.63 207 50.22 242 54.35 224\n2 32.67 372 25.82 472 27.30 446\n4 22.53 540 13.30 916 14.33 851\n8 15.14 804 7.43 1640 7.48 1632\n16 14.69 829 4.79 2544 4.50 2718\n32 15.28 797 4.41 2763 3.32 3710\n64 15.34 794 5.22 2334 3.06 4061\n128 15.49 786 4.97 2452 3.13 3926\n256 15.85 768 5.02 2427 3.26 3769\n512 16.02 760 5.29 2303 3.54 3471\n\ntest: logged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 68.18 178 59.41 205 63.43 192\n2 39.71 306 33.10 368 34.99 348\n4 27.26 446 19.75 617 20.09 607\n8 18.84 646 12.86 947 12.68 962\n16 15.96 763 9.62 1266 8.51 1436\n32 15.43 789 8.20 1486 7.77 1579\n64 16.11 756 8.91 1367 8.90 1383\n128 16.41 742 10.00 1218 9.74 1269\n256 17.33 702 11.91 1023 10.89 1136\n512 18.46 659 14.07 866 11.82 1049\n\ntest: unlogged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 63.27s 192 56.14 217 59.25 205\n2 40.17s 303 29.88 407 31.50 386\n4 27.57s 442 16.16 754 17.18 709\n8 21.26s 573 11.89 1025 11.09 1099\n16 21.25s 573 10.68 1141 10.22 1192\n32 21.00s 580 10.72 1136 10.35 1178\n64 20.64s 590 10.15 1200 9.76 1249\n128 skipped\n256 skipped\n512 skipped\n\ntest: logged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch 
patch no_fsm no_fsm\n1 71.89s 169 65.57 217 69.09 69.09\n2 47.36s 257 36.22 407 38.71 38.71\n4 33.10s 368 21.76 754 22.78 22.78\n8 26.62s 457 15.89 1025 15.30 15.30\n16 24.89s 489 17.08 1141 15.20 15.20\n32 25.15s 484 17.41 1136 16.14 16.14\n64 26.11s 466 17.89 1200 16.76 16.76\n128 skipped\n256 skipped\n512 skipped\n\n\nJust to see how far it can be pushed, with binary format we can now get to\nnearly 6GB/s into a table when disabling the FSM - note the 2x difference\nbetween patch and patch+no-fsm at 32 clients.\n\ntest: unlogged_small_files, format: binary, files: 1024, 9508MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 34.14\t357\t28.04\t434\t29.46\t413\n2 22.67\t537\t14.42\t845\t14.75\t826\n4 16.63\t732\t7.62\t1599\t7.69\t1587\n8 13.48\t904\t4.36\t2795\t4.13\t2959\n16 14.37\t848\t3.78\t3224\t2.74\t4493\n32 14.79\t823\t4.20\t2902\t2.07\t5974\n64 14.76\t825\t5.03\t2423\t2.21\t5561\n128 14.95\t815\t4.36\t2796\t2.30\t5343\n256 15.18\t802\t4.31\t2828\t2.49\t4935\n512 15.41\t790\t4.59\t2656\t2.84\t4327\n\n\ns_b=4GB\n\ntest: unlogged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1\t62.55\t194\t54.22\t224\n2\t37.11\t328\t28.94\t421\n4\t25.97\t469\t16.42\t742\n8\t20.01\t609\t11.92\t1022\n16\t19.55\t623\t11.05\t1102\n32\t19.34\t630\t11.27\t1081\n64\t19.07\t639\t12.04\t1012\n128\t19.22\t634\t12.27\t993\n256\t19.34\t630\t12.28\t992\n512\t19.60\t621\t11.74\t1038\n\ntest: logged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1\t71.71\t169\t63.63\t191\n2\t46.93\t259\t36.31\t335\n4\t30.37\t401\t22.41\t543\n8\t22.86\t533\t16.90\t721\n16\t20.18\t604\t14.07\t866\n32\t19.64\t620\t13.06\t933\n64\t19.71\t618\t15.08\t808\n128\t19.95\t610\t15.47\t787\n256\t20.48\t595\t16.53\t737\n512\t21.56\t565\t16.86\t722\n\ntest: unlogged_medium_files, format: text, files: 64, 9018MB total\n 
seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1\t62.65\t194\t55.74\t218\n2\t40.25\t302\t29.45\t413\n4\t27.37\t445\t16.26\t749\n8\t22.07\t552\t11.75\t1037\n16\t21.29\t572\t10.64\t1145\n32\t20.98\t580\t10.70\t1139\n64\t20.65\t590\t10.21\t1193\n128\tskipped\n256\tskipped\n512\tskipped\n\ntest: logged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1\t71.72\t169\t65.12\t187\n2\t46.46\t262\t35.74\t341\n4\t32.61\t373\t21.60\t564\n8\t26.69\t456\t16.30\t747\n16\t25.31\t481\t17.00\t716\n32\t24.96\t488\t17.47\t697\n64\t26.05\t467\t17.90\t680\n128\tskipped\n256\tskipped\n512\tskipped\n\n\ntest: unlogged_small_files, format: binary, files: 1024, 9505MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1\t37.62\t323\t32.77\t371\n2\t28.35\t429\t18.89\t645\n4\t20.87\t583\t12.18\t1000\n8\t19.37\t629\t10.38\t1173\n16\t19.41\t627\t10.36\t1176\n32\t18.62\t654\t11.04\t1103\n64\t18.33\t664\t11.89\t1024\n128\t18.41\t661\t11.91\t1023\n256\t18.52\t658\t12.10\t1007\n512\t18.78\t648\t11.49\t1060\n\n\nbenchmark: Run a pgbench -S workload with scale 100, so it doesn't fit into\ns_b, thereby exercising BufferAlloc()'s buffer replacement path heavily.\n\n\nThe run-to-run variance on my workstation is high for this workload (both\nbefore/after my changes). 
I also found that the ramp-up time at higher client\ncounts is very significant:\nprogress: 2.1 s, 5816.8 tps, lat 1.835 ms stddev 4.450, 0 failed\nprogress: 3.0 s, 666729.4 tps, lat 5.755 ms stddev 16.753, 0 failed\nprogress: 4.0 s, 899260.1 tps, lat 3.619 ms stddev 41.108, 0 failed\n...\n\nOne would need to run pgbench for impractically long to make that effect\nvanish.\n\nMy not great solution for these was to run with -T21 -P5 and use the best 5s\nas the tps.\n\n\ns_b=1GB\n\ttps\t\ttps\nclients\tmaster\t\tpatched\n1 49541 48805\n2\t 85342 90010\n4 167340 168918\n8 308194 303222\n16 524294 523678\n32 649516 649100\n64 932547 937702\n128 908249 906281\n256 856496 903979\n512 764254 934702\n1024 653886 925113\n2048 569695 917262\n4096 526782 903258\n\n\ns_b=128MB:\n\ttps\t\ttps\nclients\tmaster\t\tpatched\n1 40407 39854\n2 73180 72252\n4 143334 140860\n8 240982 245331\n16 429265 420810\n32 544593 540127\n64 706408 726678\n128 713142 718087\n256 611030 695582\n512 552751 686290\n1024 508248 666370\n2048 474108 656735\n4096 448582 633040\n\n\nAs there might be a small regression at the smallest end, I ran a more extreme\nversion of the above. Using a pipelined pgbench -S, with a single client, for\nlonger. With s_b=8MB.\n\nTo further reduce noise I pinned the server to one cpu, the client to another\nand disabled turbo mode on the CPU.\n\nmaster \"total\" tps: 61.52\nmaster \"best 5s\" tps: 61.8\npatch \"total\" tps: 61.20\npatch \"best 5s\" tps: 61.4\n\nHardly conclusive, but it does look like there's a small effect. It could be\ncode layout or such.\n\nMy guess however is that it's the resource owner for in-progress IO that I\nadded - that adds an additional allocation inside the resowner machinery. I\ncommented those out (that's obviously incorrect!) just to see whether that\nchanges anything:\n\nno-resowner \"total\" tps: 62.03\nno-resowner \"best 5s\" tps: 62.2\n\nSo it looks like indeed, it's the resowner. 
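One way to shave off that first allocation - embed an initial element directly in the owning struct, so that tracking a single in-progress IO never touches the allocator - could look like this standalone sketch. Names and layout are made up; this is not the actual ResourceOwner/ResourceArray code:

```c
#include <assert.h>
#include <stdlib.h>

/* Growable array whose first slot lives inline in the struct. */
typedef struct IOArray
{
	int			nitems;
	int			capacity;		/* capacity of the spill array */
	int			inline_item;	/* slot 0, embedded: no malloc needed */
	int		   *spill;			/* slots 1.., allocated on demand */
} IOArray;

static void
io_array_init(IOArray *a)
{
	a->nitems = 0;
	a->capacity = 0;
	a->spill = NULL;
}

/* Returns 0 on success, -1 on out-of-memory. */
static int
io_array_add(IOArray *a, int item)
{
	if (a->nitems == 0)
		a->inline_item = item;	/* the common, allocation-free case */
	else
	{
		int			idx = a->nitems - 1;

		if (idx >= a->capacity)
		{
			int			newcap = (a->capacity == 0) ? 4 : a->capacity * 2;
			int		   *p = realloc(a->spill, newcap * sizeof(int));

			if (p == NULL)
				return -1;
			a->spill = p;
			a->capacity = newcap;
		}
		a->spill[idx] = item;
	}
	a->nitems++;
	return 0;
}

static int
io_array_get(const IOArray *a, int idx)
{
	return (idx == 0) ? a->inline_item : a->spill[idx - 1];
}
```

With at most one IO in flight at a time, no heap allocation ever happens in this scheme.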
I am a bit surprised, because\nobviously we already use that mechanism for pins, which is more\nfrequent.\n\nI'm not sure it's worth worrying about - this is a pretty absurd workload. But\nif we decide it is, I can think of a few ways to address this. E.g.:\n\n- We could preallocate an initial element inside the ResourceArray struct, so\n that a newly created resowner won't need to allocate immediately\n- We could only use resowners if there's more than one IO in progress at the\n same time - but I don't like that idea much\n- We could try to store the \"in-progress\"-ness of a buffer inside the 'bufferpin'\n resowner entry - on 64-bit systems there's plenty of space for that. But on 32-bit systems...\n\n\nThe patches here aren't fully polished (as will be evident). But they should\nbe more than good enough to discuss whether this is a sane direction.\n\nGreetings,\n\nAndres Freund\n\n[0] https://postgr.es/m/3b108afd19fa52ed20c464a69f64d545e4a14772.camel%40postgrespro.ru\n[1] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n[2] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 6*100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n[3] https://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de",
"msg_date": "Fri, 28 Oct 2022 19:54:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On Sat, Oct 29, 2022 at 8:24 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I'm working to extract independently useful bits from my AIO work, to reduce\n> the size of that patchset. This is one of those pieces.\n\nThanks a lot for this great work. There are 12 patches in this thread,\nI believe each of these patches is trying to solve separate problems\nand can be reviewed and get committed separately, am I correct?\n\n> In workloads that extend relations a lot, we end up being extremely contended\n> on the relation extension lock. We've attempted to address that to some degree\n> by using batching, which helps, but only so much.\n\nYes, I too have observed this in the past for parallel inserts in CTAS\nwork - https://www.postgresql.org/message-id/CALj2ACW9BUoFqWkmTSeHjFD-W7_00s3orqSvtvUk%2BKD2H7ZmRg%40mail.gmail.com.\nTackling bulk relation extension problems will unblock the parallel\ninserts (in CTAS, COPY) work I believe.\n\n> The fundamental issue, in my opinion, is that we do *way* too much while\n> holding the relation extension lock. We acquire a victim buffer, if that\n> buffer is dirty, we potentially flush the WAL, then write out that\n> buffer. Then we zero out the buffer contents. Call smgrextend().\n>\n> Most of that work does not actually need to happen while holding the relation\n> extension lock. 
As far as I can tell, the minimum that needs to be covered by\n> the extension lock is the following:\n>\n> 1) call smgrnblocks()\n> 2) insert buffer[s] into the buffer mapping table at the location returned by\n> smgrnblocks\n> 3) mark buffer[s] as IO_IN_PROGRESS\n\nMakes sense.\n\nI will try to understand and review each patch separately.\n\nFirstly, 0001 avoids extra loop over waiters and looks a reasonable\nchange, some comments on the patch:\n\n1)\n+ int lwWaiting; /* 0 if not waiting, 1 if on\nwaitlist, 2 if\n+ * waiting to be woken */\nUse macros instead of hard-coded values for better readability?\n\n#define PROC_LW_LOCK_NOT_WAITING 0\n#define PROC_LW_LOCK_ON_WAITLIST 1\n#define PROC_LW_LOCK_WAITING_TO_BE_WOKEN 2\n\n2) Missing initialization of lwWaiting to 0 or the macro in twophase.c\nand proc.c.\n proc->lwWaiting = false;\n MyProc->lwWaiting = false;\n\n3)\n+ proclist_delete(&lock->waiters, MyProc->pgprocno, lwWaitLink);\n+ found = true;\nI guess 'found' is a bit meaningless here as we are doing away with\nthe proclist_foreach_modify loop. We can directly use\nMyProc->lwWaiting == 1 and remove 'found'.\n\n4)\n if (!MyProc->lwWaiting)\n if (!proc->lwWaiting)\nCan we modify the above conditions in lwlock.c to MyProc->lwWaiting !=\n1 or PROC_LW_LOCK_ON_WAITLIST or the macro?\n\n5) Is there any specific test case that I can see benefit of this\npatch? If yes, can you please share it here?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 29 Oct 2022 18:33:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-29 18:33:53 +0530, Bharath Rupireddy wrote:\n> On Sat, Oct 29, 2022 at 8:24 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > I'm working to extract independently useful bits from my AIO work, to reduce\n> > the size of that patchset. This is one of those pieces.\n> \n> Thanks a lot for this great work. There are 12 patches in this thread,\n> I believe each of these patches is trying to solve separate problems\n> and can be reviewed and get committed separately, am I correct?\n\nMostly, yes.\n\nFor 0001 I already started\nhttps://www.postgresql.org/message-id/20221027165914.2hofzp4cvutj6gin%40awork3.anarazel.de\nto discuss the specific issue.\n\nWe don't strictly need v1-0002-aio-Add-some-error-checking-around-pinning.patch\nbut I did find it useful.\n\nv1-0012-bufmgr-debug-Add-PrintBuffer-Desc.patch is not used in the patch\nseries, but I found it quite useful when debugging issues with the patch. A\nheck of a lot easier to interpret page flags when they can be printed.\n\nI also think there's some architectural questions that'll influence the number\nof patches. E.g. I'm not convinced\nv1-0010-heapam-Add-num_pages-to-RelationGetBufferForTuple.patch is quite the\nright spot to track which additional pages should be used. It could very well\ninstead be alongside ->smgr_targblock. Possibly the best path would instead\nbe to return the additional pages explicitly to callers of\nRelationGetBufferForTuple, but RelationGetBufferForTuple does a bunch of work\naround pinning that potentially would need to be repeated in heap_multi_insert().\n\n\n\n> > In workloads that extend relations a lot, we end up being extremely contended\n> > on the relation extension lock. 
We've attempted to address that to some degree\n> > by using batching, which helps, but only so much.\n> \n> Yes, I too have observed this in the past for parallel inserts in CTAS\n> work - https://www.postgresql.org/message-id/CALj2ACW9BUoFqWkmTSeHjFD-W7_00s3orqSvtvUk%2BKD2H7ZmRg%40mail.gmail.com.\n> Tackling bulk relation extension problems will unblock the parallel\n> inserts (in CTAS, COPY) work I believe.\n\nYea. There's a lot of places the current approach ended up being a bottleneck.\n\n\n> Firstly, 0001 avoids extra loop over waiters and looks a reasonable\n> change, some comments on the patch:\n\n> 1)\n> + int lwWaiting; /* 0 if not waiting, 1 if on\n> waitlist, 2 if\n> + * waiting to be woken */\n> Use macros instead of hard-coded values for better readability?\n> \n> #define PROC_LW_LOCK_NOT_WAITING 0\n> #define PROC_LW_LOCK_ON_WAITLIST 1\n> #define PROC_LW_LOCK_WAITING_TO_BE_WOKEN 2\n\nYea - this was really more of a prototype patch - I noted that we'd want to\nuse defines for this in\nhttps://www.postgresql.org/message-id/20221027165914.2hofzp4cvutj6gin%40awork3.anarazel.de\n\n\n> 3)\n> + proclist_delete(&lock->waiters, MyProc->pgprocno, lwWaitLink);\n> + found = true;\n> I guess 'found' is a bit meaningless here as we are doing away with\n> the proclist_foreach_modify loop. We can directly use\n> MyProc->lwWaiting == 1 and remove 'found'.\n\nWe can rename it, but I think we still do need it; it's easier to analyze the\nlogic if the relevant check happens on a value read while we held the wait\nlist lock. Probably should do the reset inside the locked section as well.\n\n\n> 4)\n> if (!MyProc->lwWaiting)\n> if (!proc->lwWaiting)\n> Can we modify the above conditions in lwlock.c to MyProc->lwWaiting !=\n> 1 or PROC_LW_LOCK_ON_WAITLIST or the macro?\n\nI think it's better to check it's 0, rather than just != 1.\n\n\n> 5) Is there any specific test case that I can see benefit of this\n> patch? 
If yes, can you please share it here?\n\nYep, see the other thread, there's a pretty easy case there. You can also see\nit at extreme client counts with a pgbench -S against a cluster with a smaller\nshared_buffers. But the difference is not huge before something like 2048-4096\nclients, and then it only occurs occasionally (because you need to end up with\nmost connections waiting for one of the partitions). So the test case from the\nother thread is a lot better.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 29 Oct 2022 11:39:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On Sat, 29 Oct 2022 at 08:24, Andres Freund <andres@anarazel.de> wrote:\n>\n> The patches here aren't fully polished (as will be evident). But they should\n> be more than good enough to discuss whether this is a sane direction.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nf2857af485a00ab5dbfa2c83af9d83afe4378239 ===\n=== applying patch\n./v1-0001-wip-lwlock-fix-quadratic-behaviour-with-very-long.patch\npatching file src/include/storage/proc.h\nHunk #1 FAILED at 217.\n1 out of 1 hunk FAILED -- saving rejects to file src/include/storage/proc.h.rej\npatching file src/backend/storage/lmgr/lwlock.c\nHunk #1 succeeded at 988 with fuzz 2 (offset 1 line).\nHunk #2 FAILED at 1047.\nHunk #3 FAILED at 1076.\nHunk #4 FAILED at 1104.\nHunk #5 FAILED at 1117.\nHunk #6 FAILED at 1141.\nHunk #7 FAILED at 1775.\nHunk #8 FAILED at 1790.\n7 out of 8 hunks FAILED -- saving rejects to file\nsrc/backend/storage/lmgr/lwlock.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3993.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:52:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-06 11:52:04 +0530, vignesh C wrote:\n> On Sat, 29 Oct 2022 at 08:24, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > The patches here aren't fully polished (as will be evident). But they should\n> > be more than good enough to discuss whether this is a sane direction.\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch.\n\nThanks for letting me now. Updated version attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 9 Jan 2023 18:07:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 15:08, Andres Freund <andres@anarazel.de> wrote:\n> Thanks for letting me now. Updated version attached.\n\nI'm not too sure I've qualified for giving a meaningful design review\nhere, but I have started looking at the patches and so far only made\nit as far as 0006.\n\nI noted down the following while reading:\n\nv2-0001:\n\n1. BufferCheckOneLocalPin needs a header comment\n\nv2-0002:\n\n2. The following comment and corresponding code to release the\nextension lock has been moved now.\n\n/*\n* Release the file-extension lock; it's now OK for someone else to extend\n* the relation some more.\n*/\n\nI think it's worth detailing out why it's fine to release the\nextension lock in the new location. You've added detail to the commit\nmessage but I think you need to do the same in the comments too.\n\nv2-0003\n\n3. FileFallocate() and FileZero() should likely document what they\nreturn, i.e zero on success and non-zero on failure.\n\n4. I'm not quite clear on why you've modified FileGetRawDesc() to call\nFileAccess() twice.\n\nv2-0004:\n\n5. Is it worth having two versions of PinLocalBuffer() one to adjust\nthe usage count and one that does not? Couldn't the version that does\nnot adjust the count skip doing pg_atomic_read_u32()?\n\nv2-0005\nv2-0006\n\nDavid\n\n\n",
"msg_date": "Fri, 20 Jan 2023 13:40:55 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "I'll continue reviewing this, but here's some feedback on the first two \npatches:\n\nv2-0001-aio-Add-some-error-checking-around-pinning.patch:\n\nI wonder if the extra assertion in LockBufHdr() is worth the overhead. \nIt won't add anything without assertions, of course, but still. No \nobjections if you think it's worth it.\n\n\nv2-0002-hio-Release-extension-lock-before-initializing-pa.patch:\n\nLooks good as far as it goes. It's a bit silly that we use RBM_ZERO_AND_LOCK, \nwhich zeroes the page, and then we call PageInit to zero the page again. \nRBM_ZERO_AND_LOCK only zeroes the page if it wasn't in the buffer cache \npreviously, but with P_NEW, that is always true.\n\n- Heikki\n\n\n\n",
"msg_date": "Fri, 10 Feb 2023 18:38:50 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "> v2-0005-bufmgr-Acquire-and-clean-victim-buffer-separately.patch\nThis can be applied separately from the rest of the patches, which is \nnice. Some small comments on it:\n\n* Needs a rebase, it conflicted slightly with commit f30d62c2fc.\n\n* GetVictimBuffer needs a comment to explain what it does. In \nparticular, mention that it returns a buffer that's pinned and known \n!BM_TAG_VALID.\n\n* I suggest renaming 'cur_buf' and other such local variables in \nGetVictimBuffer to just 'buf'. 'cur' prefix suggests that there is some \nother buffer involved too, but there is no 'prev' or 'next' or 'other' \nbuffer. The old code called it just 'buf' too, and before this patch it \nactually was a bit confusing because there were two buffers involved. \nBut with this patch, GetVictimBuffer only deals with one buffer at a time.\n\n* This FIXME:\n\n> \t\t/* OK, do the I/O */\n> \t\t/* FIXME: These used the wrong smgr before afaict? */\n> \t\t{\n> \t\t\tSMgrRelation smgr = smgropen(BufTagGetRelFileLocator(&buf_hdr->tag),\n> \t\t\t\t\t\t\t\t\t\t InvalidBackendId);\n> \n> \t\t\tTRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(buf_hdr->tag.forkNum,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t buf_hdr->tag.blockNum,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.spcOid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.dbOid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.relNumber);\n> \n> \t\t\tFlushBuffer(buf_hdr, smgr, IOOBJECT_RELATION, io_context);\n> \t\t\tLWLockRelease(content_lock);\n> \n> \t\t\tScheduleBufferTagForWriteback(&BackendWritebackContext,\n> \t\t\t\t\t\t\t\t\t\t &buf_hdr->tag);\n> \n> \t\t\tTRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_DONE(buf_hdr->tag.forkNum,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t buf_hdr->tag.blockNum,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.spcOid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.dbOid,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.relNumber);\n> \t\t}\n\nI believe that was 
intentional. The probes previously reported the block \nand relation whose read *caused* the eviction. It was not just the smgr \nbut also the blockNum and forkNum that referred to the block that was \nbeing read. There's another pair of probe points, \nTRACE_POSTGRESQL_BUFFER_FLUSH_START/DONE, inside FlushBuffer that \nindicate the page that is being flushed.\n\nI see that reporting the evicted page is more convenient with this \npatch, otherwise you'd need to pass the smgr and blocknum of the page \nthat's being read to InvalidateVictimBuffer(). IMHO you can just remove \nthese probe points. We don't need to bend over backwards to maintain \nspecific probe points.\n\n* InvalidateVictimBuffer reads the buffer header with an atomic read op, \njust to check if BM_TAG_VALID is set. If it's not, it does nothing \n(except for a few Asserts). But the caller has already read the buffer \nheader. Consider refactoring it so that the caller checks BM_TAG_VALID, \nand only calls InvalidateVictimBuffer if it's set, saving one atomic \nread in InvalidateVictimBuffer. I think it would be just as readable, so \nno loss there. I doubt the atomic read makes any measurable performance \ndifference, but it looks redundant.\n\n* I don't understand this comment:\n\n> \t/*\n> \t * Clear out the buffer's tag and flags and usagecount. We must do\n> \t * this to ensure that linear scans of the buffer array don't think\n> \t * the buffer is valid.\n> \t *\n> \t * XXX: This is a pre-existing comment I just moved, but isn't it\n> \t * entirely bogus with regard to the tag? We can't do anything with\n> \t * the buffer without taking BM_VALID / BM_TAG_VALID into\n> \t * account. Likely doesn't matter because we're already dirtying the\n> \t * cacheline, but still.\n> \t *\n> \t */\n> \tClearBufferTag(&buf_hdr->tag);\n> \tbuf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);\n> \tUnlockBufHdr(buf_hdr, buf_state);\n\nWhat exactly is wrong with clearing the tag? 
What does dirtying the \ncacheline have to do with the correctness here?\n\n* pgindent\n\n- Heikki\n\n\n",
"msg_date": "Sat, 11 Feb 2023 23:03:56 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 23:03:56 +0200, Heikki Linnakangas wrote:\n> > v2-0005-bufmgr-Acquire-and-clean-victim-buffer-separately.patch\n> This can be applied separately from the rest of the patches, which is nice.\n> Some small comments on it:\n\nThanks for looking at these!\n\n\n> * Needs a rebase, it conflicted slightly with commit f30d62c2fc.\n\nWill work on that.\n\n\n> * GetVictimBuffer needs a comment to explain what it does. In particular,\n> mention that it returns a buffer that's pinned and known !BM_TAG_VALID.\n\nWill add.\n\n\n> * I suggest renaming 'cur_buf' and other such local variables in\n> GetVictimBufffer to just 'buf'. 'cur' prefix suggests that there is some\n> other buffer involved too, but there is no 'prev' or 'next' or 'other'\n> buffer. The old code called it just 'buf' too, and before this patch it\n> actually was a bit confusing because there were two buffers involved. But\n> with this patch, GetVictimBuffer only deals with one buffer at a time.\n\nHm. Yea. I probably ended up with these names because initially\nGetVictimBuffer() wasn't a separate function, and I indeed constantly got\nconfused by which buffer was referenced.\n\n\n> * This FIXME:\n>\n> > \t\t/* OK, do the I/O */\n> > \t\t/* FIXME: These used the wrong smgr before afaict? 
*/\n> > \t\t{\n> > \t\t\tSMgrRelation smgr = smgropen(BufTagGetRelFileLocator(&buf_hdr->tag),\n> > \t\t\t\t\t\t\t\t\t\t InvalidBackendId);\n> >\n> > \t\t\tTRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(buf_hdr->tag.forkNum,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t buf_hdr->tag.blockNum,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.spcOid,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.dbOid,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.relNumber);\n> >\n> > \t\t\tFlushBuffer(buf_hdr, smgr, IOOBJECT_RELATION, io_context);\n> > \t\t\tLWLockRelease(content_lock);\n> >\n> > \t\t\tScheduleBufferTagForWriteback(&BackendWritebackContext,\n> > \t\t\t\t\t\t\t\t\t\t &buf_hdr->tag);\n> >\n> > \t\t\tTRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_DONE(buf_hdr->tag.forkNum,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t buf_hdr->tag.blockNum,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.spcOid,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.dbOid,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t smgr->smgr_rlocator.locator.relNumber);\n> > \t\t}\n>\n> I believe that was intentional. The probes previously reported the block and\n> relation whose read *caused* the eviction. It was not just the smgr but also\n> the blockNum and forkNum that referred to the block that was being read.\n\nYou're probably right. It's certainly not understandable from our docs\nthough:\n\n <row>\n <entry><literal>buffer-write-dirty-start</literal></entry>\n <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>\n <entry>Probe that fires when a server process begins to write a dirty\n buffer. 
(If this happens often, it implies that\n <xref linkend=\"guc-shared-buffers\"/> is too\n small or the background writer control parameters need adjustment.)\n arg0 and arg1 contain the fork and block numbers of the page.\n arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs\n identifying the relation.</entry>\n </row>\n\n\n> I see that reporting the evicted page is more convenient with this patch,\n> otherwise you'd need to pass the smgr and blocknum of the page that's being\n> read to InvalidateVictimBuffer(). IMHO you can just remove these probe\n> points. We don't need to bend over backwards to maintain specific probe\n> points.\n\nAgreed.\n\n\n> * InvalidateVictimBuffer reads the buffer header with an atomic read op,\n> just to check if BM_TAG_VALID is set.\n\nIt's not a real atomic op, in the sense of being special instruction. It does\nforce the compiler to actually read from memory, but that's it.\n\nBut you're right, even that is unnecessary. I think it ended up that way\nbecause I also wanted the full buf_hdr, and it seemed somewhat error prone to\npass in both.\n\n\n\n> * I don't understand this comment:\n>\n> > \t/*\n> > \t * Clear out the buffer's tag and flags and usagecount. We must do\n> > \t * this to ensure that linear scans of the buffer array don't think\n> > \t * the buffer is valid.\n> > \t *\n> > \t * XXX: This is a pre-existing comment I just moved, but isn't it\n> > \t * entirely bogus with regard to the tag? We can't do anything with\n> > \t * the buffer without taking BM_VALID / BM_TAG_VALID into\n> > \t * account. Likely doesn't matter because we're already dirtying the\n> > \t * cacheline, but still.\n> > \t *\n> > \t */\n> > \tClearBufferTag(&buf_hdr->tag);\n> > \tbuf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);\n> > \tUnlockBufHdr(buf_hdr, buf_state);\n>\n> What exactly is wrong with clearing the tag? 
What does dirtying the\n> cacheline have to do with the correctness here?\n\nThere's nothing wrong with clearing out the tag, but I don't think it's a hard\nrequirement today, and certainly not for the reason stated above.\n\nValidity of the buffer isn't determined by the tag, it's determined by\nBM_VALID (or, if you interpret valid more widely, BM_TAG_VALID).\n\nWithout either having pinned the buffer, or holding the buffer header\nspinlock, the tag can change at any time. And code like DropDatabaseBuffers()\nknows that, and re-checks the tag after locking the buffer header\nspinlock.\n\nAfaict, there'd be no correctness issue with removing the\nClearBufferTag(). There would be an efficiency issue though, because when\nencountering an invalid buffer, we'd unnecessarily enter InvalidateBuffer(),\nwhich'd find that BM_[TAG_]VALID isn't set, and not do anything.\n\n\nEven though it's not a correctness issue, it seems to me that\nDropRelationsAllBuffers() etc ought to check if the buffer is BM_TAG_VALID,\nbefore doing anything further. Particularly in DropRelationsAllBuffers(), the\ncheck we do for each buffer isn't cheap. Doing it for buffers that don't even\nhave a tag seems .. not smart.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 13:36:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 2023-02-11 13:36:51 -0800, Andres Freund wrote:\n> Even though it's not a correctness issue, it seems to me that\n> DropRelationsAllBuffers() etc ought to check if the buffer is BM_TAG_VALID,\n> before doing anything further. Particularly in DropRelationsAllBuffers(), the\n> check we do for each buffer isn't cheap. Doing it for buffers that don't even\n> have a tag seems .. not smart.\n\nThere's a small regression for a single relation, but after that it's a clear\nbenefit.\n\n32GB shared buffers, empty. The test creates N new relations and then rolls\nback.\n\n\t\ttps\t\ttps\nnum relations\tHEAD\t\tprecheck\n1\t\t46.11\t\t45.22\n2\t\t43.24\t\t44.87\n4\t\t35.14\t\t44.20\n8\t\t28.72\t\t42.79\n\nI don't understand the regression at 1, TBH. I think it must be a random code\nlayout issue, because the same pre-check in DropRelationBuffers() (exercised\nvia TRUNCATE of a newly created relation), shows a tiny speedup.\n\n\n",
"msg_date": "Sat, 11 Feb 2023 14:04:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-10 18:38:50 +0200, Heikki Linnakangas wrote:\n> I'll continue reviewing this, but here's some feedback on the first two\n> patches:\n> \n> v2-0001-aio-Add-some-error-checking-around-pinning.patch:\n> \n> I wonder if the extra assertion in LockBufHdr() is worth the overhead. It\n> won't add anything without assertions, of course, but still. No objections\n> if you think it's worth it.\n\nIt's so easy to get confused about local/non-local buffers, that I think it is\nuseful. I think we really need to consider cleaning up the separation\nfurther. Having half the code for local buffers in bufmgr.c and the other half\nin localbuf.c, without a scheme that I can recognize, is not a good scheme.\n\n\nIt bothers me somewhat ConditionalLockBufferForCleanup() silently accepts\nmultiple pins by the current backend. That's the right thing for\ne.g. heap_page_prune_opt(), but for something like lazy_scan_heap() it's\nnot. And yes, I did encounter a bug hidden by that when making vacuumlazy use\nAIO as part of that patchset. That's why I made BufferCheckOneLocalPin()\nexternally visible.\n\n\n\n> v2-0002-hio-Release-extension-lock-before-initializing-pa.patch:\n> \n> Looks as far as it goes. It's a bit silly that we use RBM_ZERO_AND_LOCK,\n> which zeroes the page, and then we call PageInit to zero the page again.\n> RBM_ZERO_AND_LOCK only zeroes the page if it wasn't in the buffer cache\n> previously, but with P_NEW, that is always true.\n\nIt is quite silly, and it shows up noticably in profiles. The zeroing is\ndefinitely needed in other places calling PageInit(), though. I suspect we\nshould have a PageInitZeroed() or such, that asserts the page is zero, but\notherwise skips it.\n\nSeems independent enough from this series, that I'd probably tackle it\nseparately? If you prefer, I'm ok with adding a patch to this series instead,\nthough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 14:20:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-20 13:40:55 +1300, David Rowley wrote:\n> On Tue, 10 Jan 2023 at 15:08, Andres Freund <andres@anarazel.de> wrote:\n> > Thanks for letting me now. Updated version attached.\n> \n> I'm not too sure I've qualified for giving a meaningful design review\n> here, but I have started looking at the patches and so far only made\n> it as far as 0006.\n\nThanks!\n\n\n> I noted down the following while reading:\n> \n> v2-0001:\n> \n> 1. BufferCheckOneLocalPin needs a header comment\n> \n> v2-0002:\n> \n> 2. The following comment and corresponding code to release the\n> extension lock has been moved now.\n> \n> /*\n> * Release the file-extension lock; it's now OK for someone else to extend\n> * the relation some more.\n> */\n> \n> I think it's worth detailing out why it's fine to release the\n> extension lock in the new location. You've added detail to the commit\n> message but I think you need to do the same in the comments too.\n\nWill do.\n\n\n> v2-0003\n> \n> 3. FileFallocate() and FileZero() should likely document what they\n> return, i.e zero on success and non-zero on failure.\n\nI guess I just tried to fit in with the rest of the file :)\n\n\n> 4. I'm not quite clear on why you've modified FileGetRawDesc() to call\n> FileAccess() twice.\n\nI do not have the faintest idea what happened there... Will fix.\n\n\n> v2-0004:\n> \n> 5. Is it worth having two versions of PinLocalBuffer() one to adjust\n> the usage count and one that does not? Couldn't the version that does\n> not adjust the count skip doing pg_atomic_read_u32()?\n\nI think it'd be nicer to just move the read inside the if\n(adjust_usagecount). That way the rest of the function doesn't have to be\nduplicated.\n\n\nThanks,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 14:25:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 14:25:06 -0800, Andres Freund wrote:\n> On 2023-01-20 13:40:55 +1300, David Rowley wrote:\n> > v2-0004:\n> > \n> > 5. Is it worth having two versions of PinLocalBuffer() one to adjust\n> > the usage count and one that does not? Couldn't the version that does\n> > not adjust the count skip doing pg_atomic_read_u32()?\n> \n> I think it'd be nicer to just move the read inside the if\n> (adjust_usagecount). That way the rest of the function doesn't have to be\n> duplicated.\n\nAh, no, we need it for the return value. No current users of\n PinLocalBuffer(adjust_usagecount = false)\nneed the return value, but I don't think that's necessarily the case.\n\nI'm somewhat inclined to not duplicate it, but if you think it's worth it,\nI'll do that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Feb 2023 14:33:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 11/02/2023 23:36, Andres Freund wrote:\n> On 2023-02-11 23:03:56 +0200, Heikki Linnakangas wrote:\n>> * I don't understand this comment:\n>>\n>>> \t/*\n>>> \t * Clear out the buffer's tag and flags and usagecount. We must do\n>>> \t * this to ensure that linear scans of the buffer array don't think\n>>> \t * the buffer is valid.\n>>> \t *\n>>> \t * XXX: This is a pre-existing comment I just moved, but isn't it\n>>> \t * entirely bogus with regard to the tag? We can't do anything with\n>>> \t * the buffer without taking BM_VALID / BM_TAG_VALID into\n>>> \t * account. Likely doesn't matter because we're already dirtying the\n>>> \t * cacheline, but still.\n>>> \t *\n>>> \t */\n>>> \tClearBufferTag(&buf_hdr->tag);\n>>> \tbuf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);\n>>> \tUnlockBufHdr(buf_hdr, buf_state);\n>>\n>> What exactly is wrong with clearing the tag? What does dirtying the\n>> cacheline have to do with the correctness here?\n> \n> There's nothing wrong with clearing out the tag, but I don't think it's a hard\n> requirement today, and certainly not for the reason stated above.\n> \n> Validity of the buffer isn't determined by the tag, it's determined by\n> BM_VALID (or, if you interpret valid more widely, BM_TAG_VALID).\n> \n> Without either having pinned the buffer, or holding the buffer header\n> spinlock, the tag can change at any time. And code like DropDatabaseBuffers()\n> knows that, and re-checks the the tag after locking the buffer header\n> spinlock.\n> \n> Afaict, there'd be no correctness issue with removing the\n> ClearBufferTag(). 
There would be an efficiency issue though, because when\n> encountering an invalid buffer, we'd unnecessarily enter InvalidateBuffer(),\n> which'd find that BM_[TAG_]VALID isn't set, and not do anything.\n\nOkay, gotcha.\n\n> Even though it's not a correctness issue, it seems to me that\n> DropRelationsAllBuffers() etc ought to check if the buffer is BM_TAG_VALID,\n> before doing anything further. Particularly in DropRelationsAllBuffers(), the\n> check we do for each buffer isn't cheap. Doing it for buffers that don't even\n> have a tag seems .. not smart.\n\nDepends on what percentage of buffers are valid, I guess. If all buffers \nare valid, checking BM_TAG_VALID first would lose. In practice, I doubt \nit makes any measurable difference either way.\n\nSince we're micro-optimizing, I noticed that \nBufTagMatchesRelFileLocator() compares the fields in order \"spcOid, \ndbOid, relNumber\". Before commit 82ac34db20, we used \nRelFileLocatorEqual(), which has this comment:\n\n/*\n * Note: RelFileLocatorEquals and RelFileLocatorBackendEquals compare \nrelNumber\n * first since that is most likely to be different in two unequal\n * RelFileLocators. It is probably redundant to compare spcOid if the \nother\n * fields are found equal, but do it anyway to be sure. Likewise for \nchecking\n * the backend ID in RelFileLocatorBackendEquals.\n */\n\nSo we lost that micro-optimization. Should we reorder the checks in \nBufTagMatchesRelFileLocator()?\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 21 Feb 2023 17:16:33 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "> v2-0006-bufmgr-Support-multiple-in-progress-IOs-by-using-.patch\n\nThis looks straightforward. My only concern is that it changes the order \nthat things happen at abort. Currently, AbortBufferIO() is called very \nearly in AbortTransaction(), and this patch moves it much later. I don't \nsee any immediate problems from that, but it feels scary.\n\n\n> @@ -2689,7 +2685,6 @@ InitBufferPoolAccess(void)\n> static void\n> AtProcExit_Buffers(int code, Datum arg)\n> {\n> -\tAbortBufferIO();\n> \tUnlockBuffers();\n> \n> \tCheckForBufferLeaks();\n\nHmm, do we call AbortTransaction() and ResourceOwnerRelease() on \nelog(FATAL)? Do we need to worry about that?\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 21 Feb 2023 17:40:31 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "> v2-0007-bufmgr-Move-relation-extension-handling-into-Bulk.patch\n\n> +static BlockNumber\n> +BulkExtendSharedRelationBuffered(Relation rel,\n> +\t\t\t\t\t\t\t\t SMgrRelation smgr,\n> +\t\t\t\t\t\t\t\t bool skip_extension_lock,\n> +\t\t\t\t\t\t\t\t char relpersistence,\n> +\t\t\t\t\t\t\t\t ForkNumber fork, ReadBufferMode mode,\n> +\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> +\t\t\t\t\t\t\t\t uint32 *num_pages,\n> +\t\t\t\t\t\t\t\t uint32 num_locked_pages,\n> +\t\t\t\t\t\t\t\t Buffer *buffers)\n\nUgh, that's a lot of arguments, some are inputs and some are outputs. I \ndon't have any concrete suggestions, but could we simplify this somehow? \nNeeds a comment at least.\n\n> v2-0008-Convert-a-few-places-to-ExtendRelationBuffered.patch\n\n> diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n> index de1427a1e0e..1810f7ebfef 100644\n> --- a/src/backend/access/brin/brin.c\n> +++ b/src/backend/access/brin/brin.c\n> @@ -829,9 +829,11 @@ brinbuild(Relation heap, Relation index, IndexInfo *indexInfo)\n> \t * whole relation will be rolled back.\n> \t */\n> \n> -\tmeta = ReadBuffer(index, P_NEW);\n> +\tmeta = ExtendRelationBuffered(index, NULL, true,\n> +\t\t\t\t\t\t\t\t index->rd_rel->relpersistence,\n> +\t\t\t\t\t\t\t\t MAIN_FORKNUM, RBM_ZERO_AND_LOCK,\n> +\t\t\t\t\t\t\t\t NULL);\n> \tAssert(BufferGetBlockNumber(meta) == BRIN_METAPAGE_BLKNO);\n> -\tLockBuffer(meta, BUFFER_LOCK_EXCLUSIVE);\n> \n> \tbrin_metapage_init(BufferGetPage(meta), BrinGetPagesPerRange(index),\n> \t\t\t\t\t BRIN_CURRENT_VERSION);\n\nSince we're changing the API anyway, how about introducing a new \nfunction for this case where we extend the relation but we what block \nnumber we're going to get? 
This pattern of using P_NEW and asserting the \nresult has always felt awkward to me.\n\n> -\t\tbuf = ReadBuffer(irel, P_NEW);\n> +\t\tbuf = ExtendRelationBuffered(irel, NULL, false,\n> +\t\t\t\t\t\t\t\t\t irel->rd_rel->relpersistence,\n> +\t\t\t\t\t\t\t\t\t MAIN_FORKNUM, RBM_ZERO_AND_LOCK,\n> +\t\t\t\t\t\t\t\t\t NULL);\n\nThese new calls are pretty verbose, compared to ReadBuffer(rel, P_NEW). \nI'd suggest something like:\n\nbuf = ExtendBuffer(rel);\n\nDo other ReadBufferModes than RBM_ZERO_AND_LOCK make sense with \nExtendRelationBuffered?\n\nIs it ever possible to call this without a relcache entry? WAL redo \nfunctions do that with ReadBuffer, but they only extend a relation \nimplicitly, by replay a record for a particular block.\n\nAll of the above comments are around the BulkExtendRelationBuffered() \nfunction's API. That needs a closer look and a more thought-out design \nto make it nice. Aside from that, this approach seems valid.\n\n- Heikki\n\n\n\n",
"msg_date": "Tue, 21 Feb 2023 18:18:02 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 2023-Feb-21, Heikki Linnakangas wrote:\n\n> > +static BlockNumber\n> > +BulkExtendSharedRelationBuffered(Relation rel,\n> > +\t\t\t\t\t\t\t\t SMgrRelation smgr,\n> > +\t\t\t\t\t\t\t\t bool skip_extension_lock,\n> > +\t\t\t\t\t\t\t\t char relpersistence,\n> > +\t\t\t\t\t\t\t\t ForkNumber fork, ReadBufferMode mode,\n> > +\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> > +\t\t\t\t\t\t\t\t uint32 *num_pages,\n> > +\t\t\t\t\t\t\t\t uint32 num_locked_pages,\n> > +\t\t\t\t\t\t\t\t Buffer *buffers)\n> \n> Ugh, that's a lot of arguments, some are inputs and some are outputs. I\n> don't have any concrete suggestions, but could we simplify this somehow?\n> Needs a comment at least.\n\nYeah, I noticed this too. I think it would be easy enough to add a new\nstruct that can be passed as a pointer, which can be stack-allocated\nby the caller, and which holds the input arguments that are common to\nboth functions, as is sensible.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Update: super-fast reaction on the Postgres bugs mailing list. The report\nwas acknowledged [...], and a fix is under discussion.\nThe wonders of open-source !\"\n https://twitter.com/gunnarmorling/status/1596080409259003906\n\n\n",
"msg_date": "Tue, 21 Feb 2023 17:33:31 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-21 17:40:31 +0200, Heikki Linnakangas wrote:\n> > v2-0006-bufmgr-Support-multiple-in-progress-IOs-by-using-.patch\n> \n> This looks straightforward. My only concern is that it changes the order\n> that things happen at abort. Currently, AbortBufferIO() is called very early\n> in AbortTransaction(), and this patch moves it much later. I don't see any\n> immediate problems from that, but it feels scary.\n\nYea, it does feel a bit awkward. But I suspect it's actually the right\nthing. We've not even adjusted the transaction state at the point we're\ncalling AbortBufferIO(). And AbortBufferIO() will sometimes allocate memory\nfor a WARNING, which conceivably could fail - although I don't think that's a\nparticularly realistic scenario due to TransactionAbortContext (I guess you\ncould have a large error context stack or such).\n\n\nMedium term I think we need to move a lot more of the error handling into\nresowners. Having a dozen+ places with their own choreographed sigsetjmp()\nrecovery blocks is error prone as hell. Not to mention tedious.\n\n\n> > @@ -2689,7 +2685,6 @@ InitBufferPoolAccess(void)\n> > static void\n> > AtProcExit_Buffers(int code, Datum arg)\n> > {\n> > -\tAbortBufferIO();\n> > \tUnlockBuffers();\n> > \tCheckForBufferLeaks();\n> \n> Hmm, do we call AbortTransaction() and ResourceOwnerRelease() on\n> elog(FATAL)? Do we need to worry about that?\n\nWe have before_shmem_exit() callbacks that should protect against\nthat. InitPostgres() registers ShutdownPostgres(), and\nCreateAuxProcessResourceOwner() registers\nReleaseAuxProcessResourcesCallback().\n\n\nI think we'd already be in trouble if we didn't reliably end up doing resowner\ncleanup during process exit.\n\nPerhaps ResourceOwnerCreate()/ResourceOwnerDelete() should maintain a list of\n\"active\" resource owners and have a before-exit callback that ensures the list\nis empty and PANICs if not? 
Better a crash restart than hanging because we\ndidn't release some shared resource.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Feb 2023 10:42:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-21 18:18:02 +0200, Heikki Linnakangas wrote:\n> > v2-0007-bufmgr-Move-relation-extension-handling-into-Bulk.patch\n> \n> > +static BlockNumber\n> > +BulkExtendSharedRelationBuffered(Relation rel,\n> > +\t\t\t\t\t\t\t\t SMgrRelation smgr,\n> > +\t\t\t\t\t\t\t\t bool skip_extension_lock,\n> > +\t\t\t\t\t\t\t\t char relpersistence,\n> > +\t\t\t\t\t\t\t\t ForkNumber fork, ReadBufferMode mode,\n> > +\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> > +\t\t\t\t\t\t\t\t uint32 *num_pages,\n> > +\t\t\t\t\t\t\t\t uint32 num_locked_pages,\n> > +\t\t\t\t\t\t\t\t Buffer *buffers)\n> \n> Ugh, that's a lot of arguments, some are inputs and some are outputs. I\n> don't have any concrete suggestions, but could we simplify this somehow?\n> Needs a comment at least.\n\nYea. I think this is the part of the patchset I like the least.\n\nThe ugliest bit is accepting both rel and smgr. The background to that is that\nwe need the relation oid to acquire the extension lock. But during crash\nrecovery we don't have that - which is fine, because we don't need the\nextension lock.\n\nWe could have two different of functions, but that ends up a mess as well, as\nwe've seen in other cases.\n\n\n> > v2-0008-Convert-a-few-places-to-ExtendRelationBuffered.patch\n> \n> > diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n> > index de1427a1e0e..1810f7ebfef 100644\n> > --- a/src/backend/access/brin/brin.c\n> > +++ b/src/backend/access/brin/brin.c\n> > @@ -829,9 +829,11 @@ brinbuild(Relation heap, Relation index, IndexInfo *indexInfo)\n> > \t * whole relation will be rolled back.\n> > \t */\n> > -\tmeta = ReadBuffer(index, P_NEW);\n> > +\tmeta = ExtendRelationBuffered(index, NULL, true,\n> > +\t\t\t\t\t\t\t\t index->rd_rel->relpersistence,\n> > +\t\t\t\t\t\t\t\t MAIN_FORKNUM, RBM_ZERO_AND_LOCK,\n> > +\t\t\t\t\t\t\t\t NULL);\n> > \tAssert(BufferGetBlockNumber(meta) == BRIN_METAPAGE_BLKNO);\n> > -\tLockBuffer(meta, 
BUFFER_LOCK_EXCLUSIVE);\n> > \tbrin_metapage_init(BufferGetPage(meta), BrinGetPagesPerRange(index),\n> > \t\t\t\t\t BRIN_CURRENT_VERSION);\n> \n> Since we're changing the API anyway, how about introducing a new function\n> for this case where we extend the relation but we know what block number we're\n> going to get? This pattern of using P_NEW and asserting the result has\n> always felt awkward to me.\n\nTo me it always felt like a code smell that some code insists on\ngetting specific block numbers with P_NEW. I guess it's ok for things like\nbuilding a new index, but outside of that it feels wrong.\n\nThe first case I found just now is revmap_physical_extend(). Which seems to\nextend the relation while holding an lwlock. Ugh.\n\nMaybe ExtendRelationBufferedTo() or something like that? With a big comment\nsaying that users of it are likely bad ;)\n\n\n> > -\t\tbuf = ReadBuffer(irel, P_NEW);\n> > +\t\tbuf = ExtendRelationBuffered(irel, NULL, false,\n> > +\t\t\t\t\t\t\t\t\t irel->rd_rel->relpersistence,\n> > +\t\t\t\t\t\t\t\t\t MAIN_FORKNUM, RBM_ZERO_AND_LOCK,\n> > +\t\t\t\t\t\t\t\t\t NULL);\n> \n> These new calls are pretty verbose, compared to ReadBuffer(rel, P_NEW). I'd\n> suggest something like:\n\nI guess. Not sure if it's worth optimizing for brevity all that much here -\nthere's not that many places extending relations. Several places end up with\nless code, actually, because they don't need to care about the extension lock\nthemselves anymore. I think an ExtendBuffer() that doesn't mention the fork,\netc, ends up being more confusing than helpful.\n\n\n> buf = ExtendBuffer(rel);\n\nWithout the relation in the name it just seems confusing to me - the extension\nisn't \"restricted\" to shared_buffers. ReadBuffer() isn't great as a name\neither, but it makes a bit more sense at least, it reads into a buffer. And\nit's a vastly more frequent operation, so optimizing for density is worth it.\n\n\n> Do other ReadBufferModes than RBM_ZERO_AND_LOCK make sense with\n> ExtendRelationBuffered?\n\nHm. That's a good point. Probably not. Perhaps it could be useful to support\nRBM_NORMAL as well? But even if, it'd just be a lock release away if we always\nused RBM_ZERO_AND_LOCK.\n\n\n> Is it ever possible to call this without a relcache entry? WAL redo\n> functions do that with ReadBuffer, but they only extend a relation\n> implicitly, by replaying a record for a particular block.\n\nI think we should use it for crash recovery as well, but the patch doesn't\nyet. We have some gnarly code there, see the loop using P_NEW in\nXLogReadBufferExtended(). Extending the file one-by-one is a lot more\nexpensive than doing it in bulk.\n\n\n> All of the above comments are around the BulkExtendRelationBuffered()\n> function's API. That needs a closer look and a more thought-out design to\n> make it nice. Aside from that, this approach seems valid.\n\nThanks for looking! I agree that it can stand a fair bit of polishing...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Feb 2023 11:22:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 10/28/22 9:54 PM, Andres Freund wrote:\n> b) I found that is quite beneficial to bulk-extend the relation with\n> smgrextend() even without concurrency. The reason for that is the primarily\n> the aforementioned dirty buffers that our current extension method causes.\n>\n> One bit that stumped me for quite a while is to know how much to extend the\n> relation by. RelationGetBufferForTuple() drives the decision whether / how\n> much to bulk extend purely on the contention on the extension lock, which\n> obviously does not work for non-concurrent workloads.\n>\n> After quite a while I figured out that we actually have good information on\n> how much to extend by, at least for COPY /\n> heap_multi_insert(). heap_multi_insert() can compute how much space is\n> needed to store all tuples, and pass that on to\n> RelationGetBufferForTuple().\n>\n> For that to be accurate we need to recompute that number whenever we use an\n> already partially filled page. That's not great, but doesn't appear to be a\n> measurable overhead.\nSome food for thought: I think it's also completely fine to extend any \nrelation over a certain size by multiple blocks, regardless of \nconcurrency. E.g. 10 extra blocks on an 80MB relation is 0.1%. I don't \nhave a good feel for what algorithm would make sense here; maybe \nsomething along the lines of extend = max(relpages / 2048, 128); if \nextend < 8 extend = 1; (presumably extending by just a couple extra \npages doesn't help much without concurrency).\n\n\n",
"msg_date": "Tue, 21 Feb 2023 15:00:15 -0600",
"msg_from": "Jim Nasby <nasbyj@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-21 15:00:15 -0600, Jim Nasby wrote:\n> On 10/28/22 9:54 PM, Andres Freund wrote:\n> > b) I found that is quite beneficial to bulk-extend the relation with\n> > smgrextend() even without concurrency. The reason for that is the primarily\n> > the aforementioned dirty buffers that our current extension method causes.\n> > \n> > One bit that stumped me for quite a while is to know how much to extend the\n> > relation by. RelationGetBufferForTuple() drives the decision whether / how\n> > much to bulk extend purely on the contention on the extension lock, which\n> > obviously does not work for non-concurrent workloads.\n> > \n> > After quite a while I figured out that we actually have good information on\n> > how much to extend by, at least for COPY /\n> > heap_multi_insert(). heap_multi_insert() can compute how much space is\n> > needed to store all tuples, and pass that on to\n> > RelationGetBufferForTuple().\n> > \n> > For that to be accurate we need to recompute that number whenever we use an\n> > already partially filled page. That's not great, but doesn't appear to be a\n> > measurable overhead.\n> Some food for thought: I think it's also completely fine to extend any\n> relation over a certain size by multiple blocks, regardless of concurrency.\n> E.g. 10 extra blocks on an 80MB relation is 0.1%. I don't have a good feel\n> for what algorithm would make sense here; maybe something along the lines of\n> extend = max(relpages / 2048, 128); if extend < 8 extend = 1; (presumably\n> extending by just a couple extra pages doesn't help much without\n> concurrency).\n\nI previously implemented just that. It's not easy to get right. You can easily\nend up with several backends each extending the relation by quite a bit, at\nthe same time (or you re-introduce contention). 
Which can end up with a\nrelation being larger by a bunch if data loading stops at some point.\n\nWe might want that as well at some point, but the approach implemented in the\npatchset is precise and thus always a win, and thus should be the baseline.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Feb 2023 13:12:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "\nOn 2/21/23 3:12 PM, Andres Freund wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> Hi,\n>\n> On 2023-02-21 15:00:15 -0600, Jim Nasby wrote:\n>> Some food for thought: I think it's also completely fine to extend any\n>> relation over a certain size by multiple blocks, regardless of concurrency.\n>> E.g. 10 extra blocks on an 80MB relation is 0.1%. I don't have a good feel\n>> for what algorithm would make sense here; maybe something along the lines of\n>> extend = max(relpages / 2048, 128); if extend < 8 extend = 1; (presumably\n>> extending by just a couple extra pages doesn't help much without\n>> concurrency).\n> I previously implemented just that. It's not easy to get right. You can easily\n> end up with several backends each extending the relation by quite a bit, at\n> the same time (or you re-introduce contention). Which can end up with a\n> relation being larger by a bunch if data loading stops at some point.\n>\n> We might want that as well at some point, but the approach implemented in the\n> patchset is precise and thus always a win, and thus should be the baseline.\nYeah, what I was suggesting would only make sense when there *wasn't* \ncontention.\n\n\n",
"msg_date": "Tue, 21 Feb 2023 16:49:37 -0600",
"msg_from": "Jim Nasby <nasbyj@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 21/02/2023 21:22, Andres Freund wrote:\n> On 2023-02-21 18:18:02 +0200, Heikki Linnakangas wrote:\n>> Is it ever possible to call this without a relcache entry? WAL redo\n>> functions do that with ReadBuffer, but they only extend a relation\n>> implicitly, by replay a record for a particular block.\n> \n> I think we should use it for crash recovery as well, but the patch doesn't\n> yet. We have some gnarly code there, see the loop using P_NEW in\n> XLogReadBufferExtended(). Extending the file one-by-one is a lot more\n> expensive than doing it in bulk.\n\nHmm, XLogReadBufferExtended() could use smgrzeroextend() to fill the \ngap, and then call ExtendRelationBuffered for the target page. Or the \nnew ExtendRelationBufferedTo() function that you mentioned.\n\nIn the common case that you load a lot of data to a relation extending \nit, and then crash, the WAL replay would still extend the relation one \npage at a time, which is inefficient. Changing that would need bigger \nchanges, to WAL-log the relation extension as a separate WAL record, for \nexample. I don't think we need to solve that right now, it can be \naddressed separately later.\n\n- Heikki\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:18:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 11:18:57 +0200, Heikki Linnakangas wrote:\n> On 21/02/2023 21:22, Andres Freund wrote:\n> > On 2023-02-21 18:18:02 +0200, Heikki Linnakangas wrote:\n> > > Is it ever possible to call this without a relcache entry? WAL redo\n> > > functions do that with ReadBuffer, but they only extend a relation\n> > > implicitly, by replay a record for a particular block.\n> > \n> > I think we should use it for crash recovery as well, but the patch doesn't\n> > yet. We have some gnarly code there, see the loop using P_NEW in\n> > XLogReadBufferExtended(). Extending the file one-by-one is a lot more\n> > expensive than doing it in bulk.\n> \n> Hmm, XLogReadBufferExtended() could use smgrzeroextend() to fill the gap,\n> and then call ExtendRelationBuffered for the target page. Or the new\n> ExtendRelationBufferedTo() function that you mentioned.\n\nI don't think it's safe to just use smgrzeroextend(). Without the page-level\ninterlock from the buffer entry, a concurrent reader can read/write the\nextended portion of the relation, while we're extending. That can lead to\nloosing writes.\n\nIt also turns out that just doing smgrzeroextend(), without filling s_b, is\noften bad for performance, because it may cause reads when trying to fill the\nbuffers. Although hopefully that's less of an issue during WAL replay, due to\nREGBUF_WILL_INIT.\n\n\n> In the common case that you load a lot of data to a relation extending it,\n> and then crash, the WAL replay would still extend the relation one page at a\n> time, which is inefficient. Changing that would need bigger changes, to\n> WAL-log the relation extension as a separate WAL record, for example. 
I\n> don't think we need to solve that right now, it can be addressed separately\n> later.\n\nYea, that seems indeed something for later.\n\nThere's several things we could do without adding WAL logging of relation\nextension themselves.\n\nOne relatively easy thing would be to add information about the number of\nblocks we're extending by to XLOG_HEAP2_MULTI_INSERT records. Compared to the\ninsertions themselves that'd barely be noticable.\n\nA slightly more complicated thing would be to peek ahead in the WAL (we have\ninfrastructure for that now) and extend by enough for the next few relation\nextensions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:31:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-21 11:22:26 -0800, Andres Freund wrote:\n> On 2023-02-21 18:18:02 +0200, Heikki Linnakangas wrote:\n> > Do other ReadBufferModes than RBM_ZERO_AND_LOCK make sense with\n> > ExtendRelationBuffered?\n> \n> Hm. That's a a good point. Probably not. Perhaps it could be useful to support\n> RBM_NORMAL as well? But even if, it'd just be a lock release away if we always\n> used RBM_ZERO_AND_LOCK.\n\nThere's a fair number of callers using RBM_NORMAL, via ReadBuffer[Extended]()\nright now. While some of them are trivial to convert, others aren't (e.g.,\nbrin_getinsertbuffer()). So I'm inclined to continue allowing RBM_NORMAL.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 14:07:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 21/02/2023 18:33, Alvaro Herrera wrote:\n> On 2023-Feb-21, Heikki Linnakangas wrote:\n> \n>>> +static BlockNumber\n>>> +BulkExtendSharedRelationBuffered(Relation rel,\n>>> +\t\t\t\t\t\t\t\t SMgrRelation smgr,\n>>> +\t\t\t\t\t\t\t\t bool skip_extension_lock,\n>>> +\t\t\t\t\t\t\t\t char relpersistence,\n>>> +\t\t\t\t\t\t\t\t ForkNumber fork, ReadBufferMode mode,\n>>> +\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n>>> +\t\t\t\t\t\t\t\t uint32 *num_pages,\n>>> +\t\t\t\t\t\t\t\t uint32 num_locked_pages,\n>>> +\t\t\t\t\t\t\t\t Buffer *buffers)\n>>\n>> Ugh, that's a lot of arguments, some are inputs and some are outputs. I\n>> don't have any concrete suggestions, but could we simplify this somehow?\n>> Needs a comment at least.\n> \n> Yeah, I noticed this too. I think it would be easy enough to add a new\n> struct that can be passed as a pointer, which can be stack-allocated\n> by the caller, and which holds the input arguments that are common to\n> both functions, as is sensible.\n\nWe also do this in freespace.c and visibilitymap.c:\n\n /* Extend as needed. */\n while (fsm_nblocks_now < fsm_nblocks)\n {\n PageSetChecksumInplace((Page) pg.data, fsm_nblocks_now);\n\n smgrextend(reln, FSM_FORKNUM, fsm_nblocks_now,\n pg.data, false);\n fsm_nblocks_now++;\n }\n\nWe could use the new smgrzeroextend function here. But it would be \nbetter to go through the buffer cache, because after this, the last \nblock, at 'fsm_nblocks', will be read with ReadBuffer() and modified.\n\nWe could use BulkExtendSharedRelationBuffered() to extend the relation \nand keep the last page locked, but the \nBulkExtendSharedRelationBuffered() signature doesn't allow that. It can \nreturn the first N pages locked, but there's no way to return the *last* \npage locked.\n\nPerhaps we should decompose this function into several function calls. 
\nSomething like:\n\n/* get N victim buffers, pinned and !BM_VALID */\nbuffers = BeginExtendRelation(int npages);\n\nLockRelationForExtension(rel)\n\n/* Insert buffers into buffer table */\nfirst_blk = smgrnblocks()\nfor (blk = first_blk; blk < last_blk; blk++)\n MapNewBuffer(blk, buffers[i])\n\n/* extend the file on disk */\nsmgrzeroextend();\n\nUnlockRelationForExtension(rel)\n\nfor (blk = first_blk; blk < last_blk; blk++)\n{\n memset(BufferGetPage(buffers[i]), 0,\n FinishNewBuffer(buffers[i])\n /* optionally lock the buffer */\n LockBuffer(buffers[i]);\n}\n\nThat's a lot more verbose, of course, but gives the callers the \nflexibility. And might even be more readable than one function call with \nlots of arguments.\n\nThis would expose the concept of a buffer that's mapped but marked as \nIO-in-progress outside bufmgr.c. On one hand, maybe that's exposing \ndetails that shouldn't be exposed. On the other hand, it might come \nhandy. Instead of RBM_ZERO_AND_LOCK mode, for example, it might be handy \nto have a function that returns an IO-in-progress buffer that you can \ninitialize any way you want.\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 27 Feb 2023 18:06:22 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-27 18:06:22 +0200, Heikki Linnakangas wrote:\n> We also do this in freespace.c and visibilitymap.c:\n> \n> /* Extend as needed. */\n> while (fsm_nblocks_now < fsm_nblocks)\n> {\n> PageSetChecksumInplace((Page) pg.data, fsm_nblocks_now);\n> \n> smgrextend(reln, FSM_FORKNUM, fsm_nblocks_now,\n> pg.data, false);\n> fsm_nblocks_now++;\n> }\n> \n> We could use the new smgrzeroextend function here. But it would be better to\n> go through the buffer cache, because after this, the last block, at\n> 'fsm_nblocks', will be read with ReadBuffer() and modified.\n\nI doubt it's a particularly crucial thing to optimize.\n\nBut, uh, isn't this code racy? Because this doesn't go through shared buffers,\nthere's no IO_IN_PROGRESS interlocking against a concurrent reader. We know\nthat writing pages isn't atomic vs readers. So another connection could\nconnection could see the new relation size, but a read might return a\npartially written state of the page. Which then would cause checksum\nfailures. And even worse, I think it could lead to loosing a write, if the\nconcurrent connection writes out a page.\n\n\n> We could use BulkExtendSharedRelationBuffered() to extend the relation and\n> keep the last page locked, but the BulkExtendSharedRelationBuffered()\n> signature doesn't allow that. It can return the first N pages locked, but\n> there's no way to return the *last* page locked.\n\nWe can't rely on bulk extending a, potentially, large number of pages in one\ngo anyway (since we might not be allowed to pin that many pages). So I don't\nthink requiring locking the last page is a really viable API.\n\nI think for this case I'd just just use the ExtendRelationTo() API we were\ndiscussing nearby. Compared to the cost of reducing syscalls / filesystem\noverhead to extend the relation, the cost of the buffer mapping lookup does't\nseem significant. That's different in e.g. 
the hio.c case, because there we\nneed a buffer with free space, and concurrent activity could otherwise fill up\nthe buffer before we can lock it again.\n\n\nI had started hacking on ExtendRelationTo() that when I saw problems with the\nexisting code that made me hesitate:\nhttps://postgr.es/m/20230223010147.32oir7sb66slqnjk%40awork3.anarazel.de\n\n\n> Perhaps we should decompose this function into several function calls.\n> Something like:\n> \n> /* get N victim buffers, pinned and !BM_VALID */\n> buffers = BeginExtendRelation(int npages);\n> \n> LockRelationForExtension(rel)\n> \n> /* Insert buffers into buffer table */\n> first_blk = smgrnblocks()\n> for (blk = first_blk; blk < last_blk; blk++)\n> MapNewBuffer(blk, buffers[i])\n> \n> /* extend the file on disk */\n> smgrzeroextend();\n> \n> UnlockRelationForExtension(rel)\n> \n> for (blk = first_blk; blk < last_blk; blk++)\n> {\n> memset(BufferGetPage(buffers[i]), 0,\n> FinishNewBuffer(buffers[i])\n> /* optionally lock the buffer */\n> LockBuffer(buffers[i]);\n> }\n> \n> That's a lot more verbose, of course, but gives the callers the flexibility.\n> And might even be more readable than one function call with lots of\n> arguments.\n\nTo me this seems like a quite bad idea. The amount of complexity this would\nexpose all over the tree is substantial. Which would also make it harder to\nfurther improve relation extension at a later date. It certainly shouldn't be\nthe default interface. And I'm not sure I see a promisung usecase.\n\n\n> This would expose the concept of a buffer that's mapped but marked as\n> IO-in-progress outside bufmgr.c. On one hand, maybe that's exposing details\n> that shouldn't be exposed. On the other hand, it might come handy. 
Instead\n> of RBM_ZERO_AND_LOCK mode, for example, it might be handy to have a function\n> that returns an IO-in-progress buffer that you can initialize any way you\n> want.\n\nI'd much rather encapsulate that in additional functions, or perhaps a\ncallback that can make decisions about what to do.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Feb 2023 13:45:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-21 17:33:31 +0100, Alvaro Herrera wrote:\n> On 2023-Feb-21, Heikki Linnakangas wrote:\n> \n> > > +static BlockNumber\n> > > +BulkExtendSharedRelationBuffered(Relation rel,\n> > > +\t\t\t\t\t\t\t\t SMgrRelation smgr,\n> > > +\t\t\t\t\t\t\t\t bool skip_extension_lock,\n> > > +\t\t\t\t\t\t\t\t char relpersistence,\n> > > +\t\t\t\t\t\t\t\t ForkNumber fork, ReadBufferMode mode,\n> > > +\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> > > +\t\t\t\t\t\t\t\t uint32 *num_pages,\n> > > +\t\t\t\t\t\t\t\t uint32 num_locked_pages,\n> > > +\t\t\t\t\t\t\t\t Buffer *buffers)\n> > \n> > Ugh, that's a lot of arguments, some are inputs and some are outputs. I\n> > don't have any concrete suggestions, but could we simplify this somehow?\n> > Needs a comment at least.\n> \n> Yeah, I noticed this too. I think it would be easy enough to add a new\n> struct that can be passed as a pointer, which can be stack-allocated\n> by the caller, and which holds the input arguments that are common to\n> both functions, as is sensible.\n\nI played a fair bit with various options. I ended up not using a struct to\npass most options, but instead go for a flags argument. However, I did use a\nstruct for passing either relation or smgr.\n\n\ntypedef enum ExtendBufferedFlags {\n\tEB_SKIP_EXTENSION_LOCK = (1 << 0),\n\tEB_IN_RECOVERY = (1 << 1),\n\tEB_CREATE_FORK_IF_NEEDED = (1 << 2),\n\tEB_LOCK_FIRST = (1 << 3),\n\n\t/* internal flags follow */\n\tEB_RELEASE_PINS = (1 << 4),\n} ExtendBufferedFlags;\n\n/*\n * To identify the relation - either relation or smgr + relpersistence has to\n * be specified. Used via the EB_REL()/EB_SMGR() macros below. 
This allows us\n * to use the same function for both crash recovery and normal operation.\n */\ntypedef struct ExtendBufferedWhat\n{\n\tRelation rel;\n\tstruct SMgrRelationData *smgr;\n\tchar relpersistence;\n} ExtendBufferedWhat;\n\n#define EB_REL(p_rel) ((ExtendBufferedWhat){.rel = p_rel})\n/* requires use of EB_SKIP_EXTENSION_LOCK */\n#define EB_SMGR(p_smgr, p_relpersistence) ((ExtendBufferedWhat){.smgr = p_smgr, .relpersistence = p_relpersistence})\n\n\nextern Buffer ExtendBufferedRel(ExtendBufferedWhat eb,\n\t\t\t\t\t\t\t\tForkNumber forkNum,\n\t\t\t\t\t\t\t\tBufferAccessStrategy strategy,\n\t\t\t\t\t\t\t\tuint32 flags);\nextern BlockNumber ExtendBufferedRelBy(ExtendBufferedWhat eb,\n\t\t\t\t\t\t\t\t\t ForkNumber fork,\n\t\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n\t\t\t\t\t\t\t\t\t uint32 flags,\n\t\t\t\t\t\t\t\t\t uint32 extend_by,\n\t\t\t\t\t\t\t\t\t Buffer *buffers,\n\t\t\t\t\t\t\t\t\t uint32 *extended_by);\nextern Buffer ExtendBufferedRelTo(ExtendBufferedWhat eb,\n\t\t\t\t\t\t\t\t ForkNumber fork,\n\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n\t\t\t\t\t\t\t\t uint32 flags,\n\t\t\t\t\t\t\t\t BlockNumber extend_to,\n\t\t\t\t\t\t\t\t ReadBufferMode mode);\n\nAs you can see I removed ReadBufferMode from most of the functions (as\nsuggested by Heikki earlier). When extending by 1/multiple pages, we only need\nto know whether to lock or not.\n\nThe reason ExtendBufferedRelTo() has a 'mode' argument is that that allows to\nfall back to reading page normally if there was a concurrent relation\nextension.\n\nThe reason EB_CREATE_FORK_IF_NEEDED exists is to remove the duplicated,\ngnarly, code to do so from vm_extend(), fsm_extend().\n\n\nI'm not sure about the function naming pattern. I do like 'By' a lot more than\nthe Bulk prefix I used before.\n\n\nWhat do you think?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 28 Feb 2023 23:33:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 27/02/2023 23:45, Andres Freund wrote:\n> On 2023-02-27 18:06:22 +0200, Heikki Linnakangas wrote:\n>> We also do this in freespace.c and visibilitymap.c:\n>>\n>> /* Extend as needed. */\n>> while (fsm_nblocks_now < fsm_nblocks)\n>> {\n>> PageSetChecksumInplace((Page) pg.data, fsm_nblocks_now);\n>>\n>> smgrextend(reln, FSM_FORKNUM, fsm_nblocks_now,\n>> pg.data, false);\n>> fsm_nblocks_now++;\n>> }\n>>\n>> We could use the new smgrzeroextend function here. But it would be better to\n>> go through the buffer cache, because after this, the last block, at\n>> 'fsm_nblocks', will be read with ReadBuffer() and modified.\n> \n> I doubt it's a particularly crucial thing to optimize.\n\nYeah, it won't make any practical difference to performance. I'm more \nthinking if we can make this more consistent with other places where we \nextend a relation.\n\n> But, uh, isn't this code racy? Because this doesn't go through shared buffers,\n> there's no IO_IN_PROGRESS interlocking against a concurrent reader. We know\n> that writing pages isn't atomic vs readers. So another connection could\n> connection could see the new relation size, but a read might return a\n> partially written state of the page. Which then would cause checksum\n> failures. And even worse, I think it could lead to loosing a write, if the\n> concurrent connection writes out a page.\n\nfsm_readbuf and vm_readbuf check the relation size first, with \nsmgrnblocks(), before trying to read the page. So to have a problem, the \nsmgrnblocks() would have to already return the new size, but the \nsmgrread() would not return the new contents. I don't think that's \npossible, but not sure.\n\n>> We could use BulkExtendSharedRelationBuffered() to extend the relation and\n>> keep the last page locked, but the BulkExtendSharedRelationBuffered()\n>> signature doesn't allow that. 
It can return the first N pages locked, but\n>> there's no way to return the *last* page locked.\n> \n> We can't rely on bulk extending a, potentially, large number of pages in one\n> go anyway (since we might not be allowed to pin that many pages). So I don't\n> think requiring locking the last page is a really viable API.\n> \n> I think for this case I'd just just use the ExtendRelationTo() API we were\n> discussing nearby. Compared to the cost of reducing syscalls / filesystem\n> overhead to extend the relation, the cost of the buffer mapping lookup does't\n> seem significant. That's different in e.g. the hio.c case, because there we\n> need a buffer with free space, and concurrent activity could otherwise fill up\n> the buffer before we can lock it again.\n\nWorks for me.\n\n- Heikki\n\n\n\n",
"msg_date": "Wed, 1 Mar 2023 11:12:35 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-01 11:12:35 +0200, Heikki Linnakangas wrote:\n> On 27/02/2023 23:45, Andres Freund wrote:\n> > But, uh, isn't this code racy? Because this doesn't go through shared buffers,\n> > there's no IO_IN_PROGRESS interlocking against a concurrent reader. We know\n> > that writing pages isn't atomic vs readers. So another connection could\n> > connection could see the new relation size, but a read might return a\n> > partially written state of the page. Which then would cause checksum\n> > failures. And even worse, I think it could lead to loosing a write, if the\n> > concurrent connection writes out a page.\n> \n> fsm_readbuf and vm_readbuf check the relation size first, with\n> smgrnblocks(), before trying to read the page. So to have a problem, the\n> smgrnblocks() would have to already return the new size, but the smgrread()\n> would not return the new contents. I don't think that's possible, but not\n> sure.\n\nI hacked Thomas' program to test torn reads to ftruncate the file on the write\nside.\n\nIt frequently observes a file size that's not the write size (e.g. reading 4k\nwhen writing an 8k block).\n\nAfter extending the test to more than one reader, I indeed also see torn\nreads. So far all the tears have been at a 4k block boundary. However so far\nit always has been *prior* page contents, not 0s.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Mar 2023 09:02:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-01 09:02:00 -0800, Andres Freund wrote:\n> On 2023-03-01 11:12:35 +0200, Heikki Linnakangas wrote:\n> > On 27/02/2023 23:45, Andres Freund wrote:\n> > > But, uh, isn't this code racy? Because this doesn't go through shared buffers,\n> > > there's no IO_IN_PROGRESS interlocking against a concurrent reader. We know\n> > > that writing pages isn't atomic vs readers. So another connection could\n> > > connection could see the new relation size, but a read might return a\n> > > partially written state of the page. Which then would cause checksum\n> > > failures. And even worse, I think it could lead to loosing a write, if the\n> > > concurrent connection writes out a page.\n> > \n> > fsm_readbuf and vm_readbuf check the relation size first, with\n> > smgrnblocks(), before trying to read the page. So to have a problem, the\n> > smgrnblocks() would have to already return the new size, but the smgrread()\n> > would not return the new contents. I don't think that's possible, but not\n> > sure.\n> \n> I hacked Thomas' program to test torn reads to ftruncate the file on the write\n> side.\n> \n> It frequently observes a file size that's not the write size (e.g. reading 4k\n> when writing an 8k block).\n> \n> After extending the test to more than one reader, I indeed also see torn\n> reads. So far all the tears have been at a 4k block boundary. However so far\n> it always has been *prior* page contents, not 0s.\n\nOn tmpfs the failure rate is much higher, and we also end up reading 0s,\ndespite never writing them.\n\nI've attached my version of the test program.\n\next4: lots of 4k reads with 8k writes, some torn reads at 4k boundaries\nxfs: no issues\ntmpfs: loads of 4k reads with 8k writes, lots torn reads reading 0s, some torn reads at 4k boundaries\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 1 Mar 2023 09:25:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On 01/03/2023 09:33, Andres Freund wrote:\n> On 2023-02-21 17:33:31 +0100, Alvaro Herrera wrote:\n>> On 2023-Feb-21, Heikki Linnakangas wrote:\n>>\n>>>> +static BlockNumber\n>>>> +BulkExtendSharedRelationBuffered(Relation rel,\n>>>> +\t\t\t\t\t\t\t\t SMgrRelation smgr,\n>>>> +\t\t\t\t\t\t\t\t bool skip_extension_lock,\n>>>> +\t\t\t\t\t\t\t\t char relpersistence,\n>>>> +\t\t\t\t\t\t\t\t ForkNumber fork, ReadBufferMode mode,\n>>>> +\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n>>>> +\t\t\t\t\t\t\t\t uint32 *num_pages,\n>>>> +\t\t\t\t\t\t\t\t uint32 num_locked_pages,\n>>>> +\t\t\t\t\t\t\t\t Buffer *buffers)\n>>>\n>>> Ugh, that's a lot of arguments, some are inputs and some are outputs. I\n>>> don't have any concrete suggestions, but could we simplify this somehow?\n>>> Needs a comment at least.\n>>\n>> Yeah, I noticed this too. I think it would be easy enough to add a new\n>> struct that can be passed as a pointer, which can be stack-allocated\n>> by the caller, and which holds the input arguments that are common to\n>> both functions, as is sensible.\n> \n> I played a fair bit with various options. I ended up not using a struct to\n> pass most options, but instead go for a flags argument. However, I did use a\n> struct for passing either relation or smgr.\n> \n> \n> typedef enum ExtendBufferedFlags {\n> \tEB_SKIP_EXTENSION_LOCK = (1 << 0),\n> \tEB_IN_RECOVERY = (1 << 1),\n> \tEB_CREATE_FORK_IF_NEEDED = (1 << 2),\n> \tEB_LOCK_FIRST = (1 << 3),\n> \n> \t/* internal flags follow */\n> \tEB_RELEASE_PINS = (1 << 4),\n> } ExtendBufferedFlags;\n\nIs EB_IN_RECOVERY always set when RecoveryInProgress()? Is it really \nneeded? What does EB_LOCK_FIRST do?\n\n> /*\n> * To identify the relation - either relation or smgr + relpersistence has to\n> * be specified. Used via the EB_REL()/EB_SMGR() macros below. 
This allows us\n> * to use the same function for both crash recovery and normal operation.\n> */\n> typedef struct ExtendBufferedWhat\n> {\n> \tRelation rel;\n> \tstruct SMgrRelationData *smgr;\n> \tchar relpersistence;\n> } ExtendBufferedWhat;\n> \n> #define EB_REL(p_rel) ((ExtendBufferedWhat){.rel = p_rel})\n> /* requires use of EB_SKIP_EXTENSION_LOCK */\n> #define EB_SMGR(p_smgr, p_relpersistence) ((ExtendBufferedWhat){.smgr = p_smgr, .relpersistence = p_relpersistence})\n\nClever. I'm still not 100% convinced we need the EB_SMGR variant, but \nwith this we'll have the flexibility in any case.\n\n> extern Buffer ExtendBufferedRel(ExtendBufferedWhat eb,\n> \t\t\t\t\t\t\t\tForkNumber forkNum,\n> \t\t\t\t\t\t\t\tBufferAccessStrategy strategy,\n> \t\t\t\t\t\t\t\tuint32 flags);\n> extern BlockNumber ExtendBufferedRelBy(ExtendBufferedWhat eb,\n> \t\t\t\t\t\t\t\t\t ForkNumber fork,\n> \t\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> \t\t\t\t\t\t\t\t\t uint32 flags,\n> \t\t\t\t\t\t\t\t\t uint32 extend_by,\n> \t\t\t\t\t\t\t\t\t Buffer *buffers,\n> \t\t\t\t\t\t\t\t\t uint32 *extended_by);\n> extern Buffer ExtendBufferedRelTo(ExtendBufferedWhat eb,\n> \t\t\t\t\t\t\t\t ForkNumber fork,\n> \t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> \t\t\t\t\t\t\t\t uint32 flags,\n> \t\t\t\t\t\t\t\t BlockNumber extend_to,\n> \t\t\t\t\t\t\t\t ReadBufferMode mode);\n> \n> As you can see I removed ReadBufferMode from most of the functions (as\n> suggested by Heikki earlier). When extending by 1/multiple pages, we only need\n> to know whether to lock or not.\n\nOk, that's better. Still complex and a lot of arguments, but I don't \nhave any great suggestions on how to improve it.\n\n> The reason ExtendBufferedRelTo() has a 'mode' argument is that that allows to\n> fall back to reading page normally if there was a concurrent relation\n> extension.\n\nHmm, I think you'll need another return value, to let the caller know if \nthe relation was extended or not. 
Or a flag to ereport(ERROR) if the \npage already exists, for ginbuild() and friends.\n\n> The reason EB_CREATE_FORK_IF_NEEDED exists is to remove the duplicated,\n> gnarly, code to do so from vm_extend(), fsm_extend().\n\nMakes sense.\n\n> I'm not sure about the function naming pattern. I do like 'By' a lot more than\n> the Bulk prefix I used before.\n\n+1\n\n- Heikki\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 00:04:14 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-02 00:04:14 +0200, Heikki Linnakangas wrote:\n> On 01/03/2023 09:33, Andres Freund wrote:\n> > typedef enum ExtendBufferedFlags {\n> > \tEB_SKIP_EXTENSION_LOCK = (1 << 0),\n> > \tEB_IN_RECOVERY = (1 << 1),\n> > \tEB_CREATE_FORK_IF_NEEDED = (1 << 2),\n> > \tEB_LOCK_FIRST = (1 << 3),\n> >\n> > \t/* internal flags follow */\n> > \tEB_RELEASE_PINS = (1 << 4),\n> > } ExtendBufferedFlags;\n>\n> Is EB_IN_RECOVERY always set when RecoveryInProgress()? Is it really needed?\n\nRight now it's just passed in from the caller. It's at the moment just needed\nto know what to pass to smgrcreate(isRedo).\n\nHowever, XLogReadBufferExtended() doesn't currently use this path, so maybe\nit's not actually needed.\n\n\n> What does EB_LOCK_FIRST do?\n\nLock the first returned buffer, this is basically the replacement for\nnum_locked_buffers from the earlier version. I think it's likely that either\nlocking the first, or potentially at some later point locking all buffers, is\nall that's needed for ExtendBufferedRelBy().\n\nEB_LOCK_FIRST_BUFFER would perhaps be better?\n\n\n> > /*\n> > * To identify the relation - either relation or smgr + relpersistence has to\n> > * be specified. Used via the EB_REL()/EB_SMGR() macros below. This allows us\n> > * to use the same function for both crash recovery and normal operation.\n> > */\n> > typedef struct ExtendBufferedWhat\n> > {\n> > \tRelation rel;\n> > \tstruct SMgrRelationData *smgr;\n> > \tchar relpersistence;\n> > } ExtendBufferedWhat;\n> >\n> > #define EB_REL(p_rel) ((ExtendBufferedWhat){.rel = p_rel})\n> > /* requires use of EB_SKIP_EXTENSION_LOCK */\n> > #define EB_SMGR(p_smgr, p_relpersistence) ((ExtendBufferedWhat){.smgr = p_smgr, .relpersistence = p_relpersistence})\n>\n> Clever. 
I'm still not 100% convinced we need the EB_SMGR variant, but with\n> this we'll have the flexibility in any case.\n\nHm - how would you use it from XLogReadBufferExtended() without that?\n\n\nXLogReadBufferExtended() spends a disappointing amount of time in\nsmgropen(). Quite visible in profiles.\n\nIn the plan read case at least one in XLogReadBufferExtended()\nitself, then in ReadBufferWithoutRelcache(). The extension path right now is\nworse - it does one smgropen() for each extended block.\n\nI think we should avoid using ReadBufferWithoutRelcache() in\nXLogReadBufferExtended() in the read path as well, but that's for later.\n\n\n> > extern Buffer ExtendBufferedRel(ExtendBufferedWhat eb,\n> > \t\t\t\t\t\t\t\tForkNumber forkNum,\n> > \t\t\t\t\t\t\t\tBufferAccessStrategy strategy,\n> > \t\t\t\t\t\t\t\tuint32 flags);\n> > extern BlockNumber ExtendBufferedRelBy(ExtendBufferedWhat eb,\n> > \t\t\t\t\t\t\t\t\t ForkNumber fork,\n> > \t\t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> > \t\t\t\t\t\t\t\t\t uint32 flags,\n> > \t\t\t\t\t\t\t\t\t uint32 extend_by,\n> > \t\t\t\t\t\t\t\t\t Buffer *buffers,\n> > \t\t\t\t\t\t\t\t\t uint32 *extended_by);\n> > extern Buffer ExtendBufferedRelTo(ExtendBufferedWhat eb,\n> > \t\t\t\t\t\t\t\t ForkNumber fork,\n> > \t\t\t\t\t\t\t\t BufferAccessStrategy strategy,\n> > \t\t\t\t\t\t\t\t uint32 flags,\n> > \t\t\t\t\t\t\t\t BlockNumber extend_to,\n> > \t\t\t\t\t\t\t\t ReadBufferMode mode);\n> >\n> > As you can see I removed ReadBufferMode from most of the functions (as\n> > suggested by Heikki earlier). When extending by 1/multiple pages, we only need\n> > to know whether to lock or not.\n>\n> Ok, that's better. Still complex and a lot of arguments, but I don't have\n> any great suggestions on how to improve it.\n\nI don't think there are going to be all that many callers of\nExtendBufferedRelBy() and ExtendBufferedRelTo(), most are going to be\nExtendBufferedRel(), I think. 
So the complexity seems acceptable.\n\n\n> > The reason ExtendBufferedRelTo() has a 'mode' argument is that that allows to\n> > fall back to reading page normally if there was a concurrent relation\n> > extension.\n>\n> Hmm, I think you'll need another return value, to let the caller know if the\n> relation was extended or not. Or a flag to ereport(ERROR) if the page\n> already exists, for ginbuild() and friends.\n\nI don't think ginbuild() et al need to use ExtendBufferedRelTo()? A plain\nExtendBufferedRel() should suffice. The places that do need it are\nfsm_extend() and vm_extend() - I did end up avoiding the redundant lookup.\n\nBut I was wondering about a flag controlling this as well.\n\n\nAttached is my current version of this. Still needs more polishing, including\ncomments explaining the flags. But I thought it'd be useful to have it out\nthere.\n\nThere's two new patches in the series:\n- a patch to not initialize pages in the loop in fsm_extend(), vm_extend()\n anymore - we have a check about initializing pages at a later point, so\n there doesn't really seem to be a need for it?\n- a patch to use the new ExtendBufferedRelTo() in fsm_extend(), vm_extend()\n and XLogReadBufferExtended()\n\n\nIn this version I also tries to address some of the other feedback raised in\nthe thread. One thing I haven't decided what to do about yet is David's\nfeedback about a version of PinLocalBuffer() that doesn't adjust the\nusagecount, which wouldn't need to read the buf_state.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 1 Mar 2023 14:35:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-28 19:54:20 -0700, Andres Freund wrote:\n> I've done a fair bit of benchmarking of this patchset. For COPY it comes out\n> ahead everywhere. It's possible that there's a very small regression for\n> extremly IO miss heavy workloads, more below.\n> \n> \n> server \"base\" configuration:\n> \n> max_wal_size=150GB\n> shared_buffers=24GB\n> huge_pages=on\n> autovacuum=0\n> backend_flush_after=2MB\n> max_connections=5000\n> wal_buffers=128MB\n> wal_segment_size=1GB\n> \n> benchmark: pgbench running COPY into a single table. pgbench -t is set\n> according to the client count, so that the same amount of data is inserted.\n> This is done oth using small files ([1], ringbuffer not effective, no dirty\n> data to write out within the benchmark window) and a bit larger files ([2],\n> lots of data to write out due to ringbuffer).\n> \n> To make it a fair comparison HEAD includes the lwlock-waitqueue fix as well.\n> \n> s_b=24GB\n> \n> test: unlogged_small_files, format: text, files: 1024, 9015MB total\n> seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\n> clients HEAD HEAD patch patch no_fsm no_fsm\n> 1 58.63 207 50.22 242 54.35 224\n> 2 32.67 372 25.82 472 27.30 446\n> 4 22.53 540 13.30 916 14.33 851\n> 8 15.14 804 7.43 1640 7.48 1632\n> 16 14.69 829 4.79 2544 4.50 2718\n> 32 15.28 797 4.41 2763 3.32 3710\n> 64 15.34 794 5.22 2334 3.06 4061\n> 128 15.49 786 4.97 2452 3.13 3926\n> 256 15.85 768 5.02 2427 3.26 3769\n> 512 16.02 760 5.29 2303 3.54 3471\n\nI just spent a few hours trying to reproduce these benchmark results. For the\nlongest time I could not get the numbers for *HEAD* to even get close to the\nabove, while the numbers for the patch were very close.\n\nI was worried it was a performance regression in HEAD etc. 
But no, same git\ncommit as back then produces the same issue.\n\n\nAs it turns out, I somehow screwed up my benchmark tooling, and I did not set\nset the CPU \"scaling_governor\" and \"energy_performance_preference\" to\n\"performance\". In a crazy turn of events, that approximately makes no\ndifference with the patch applied, and a 2x difference for HEAD.\n\n\nI suspect this is some pathological issue when encountering heavy lock\ncontention (likely leading to the CPU reducing speed into a deeper state,\nwhich then takes longer to get out of when the lock is released). As the lock\ncontention is drastically reduced with the patch, that affect is not visible\nanymore.\n\n\nAfter fixing the performance scaling issue, the results are quite close to the\nabove numbers again...\n\n\nAargh, I want my afternoon back.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Mar 2023 22:03:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nAttached is v5. Lots of comment polishing, a bit of renaming. I extracted the\nrelation extension related code in hio.c back into its own function.\n\nWhile reviewing the hio.c code, I did realize that too much stuff is done\nwhile holding the buffer lock. See also the pre-existing issue\nhttps://postgr.es/m/20230325025740.wzvchp2kromw4zqz%40awork3.anarazel.de\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 26 Mar 2023 12:26:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nBelow is my review of a slightly older version than you just posted --\nmuch of it you may have already addressed.\n\n From 3a6c3f41000e057bae12ab4431e6bb1c5f3ec4b0 Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Mon, 20 Mar 2023 21:57:40 -0700\nSubject: [PATCH v5 01/15] createdb-using-wal-fixes\n\nThis could use a more detailed commit message -- I don't really get what\nit is doing\n\n From 6faba69c241fd5513022bb042c33af09d91e84a6 Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Wed, 1 Jul 2020 19:06:45 -0700\nSubject: [PATCH v5 02/15] Add some error checking around pinning\n\n---\n src/backend/storage/buffer/bufmgr.c | 40 ++++++++++++++++++++---------\n src/include/storage/bufmgr.h | 1 +\n 2 files changed, 29 insertions(+), 12 deletions(-)\n\ndiff --git a/src/backend/storage/buffer/bufmgr.c\nb/src/backend/storage/buffer/bufmgr.c\nindex 95212a3941..fa20fab5a2 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -4283,6 +4287,25 @@ ConditionalLockBuffer(Buffer buffer)\n LW_EXCLUSIVE);\n }\n\n+void\n+BufferCheckOneLocalPin(Buffer buffer)\n+{\n+ if (BufferIsLocal(buffer))\n+ {\n+ /* There should be exactly one pin */\n+ if (LocalRefCount[-buffer - 1] != 1)\n+ elog(ERROR, \"incorrect local pin count: %d\",\n+ LocalRefCount[-buffer - 1]);\n+ }\n+ else\n+ {\n+ /* There should be exactly one local pin */\n+ if (GetPrivateRefCount(buffer) != 1)\n\nI'd rather this be an else if (was already like this, but, no reason not\nto change it now)\n\n+ elog(ERROR, \"incorrect local pin count: %d\",\n+ GetPrivateRefCount(buffer));\n+ }\n+}\n\n From 00d3044770478eea31e00fee8d1680f22ca6adde Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Mon, 27 Feb 2023 17:36:37 -0800\nSubject: [PATCH v5 04/15] Add smgrzeroextend(), FileZero(), FileFallocate()\n\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\nindex 
9fd8444ed4..c34ed41d52 100644\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -2206,6 +2206,92 @@ FileSync(File file, uint32 wait_event_info)\n return returnCode;\n }\n\n+/*\n+ * Zero a region of the file.\n+ *\n+ * Returns 0 on success, -1 otherwise. In the latter case errno is set to the\n+ * appropriate error.\n+ */\n+int\n+FileZero(File file, off_t offset, off_t amount, uint32 wait_event_info)\n+{\n+ int returnCode;\n+ ssize_t written;\n+\n+ Assert(FileIsValid(file));\n+ returnCode = FileAccess(file);\n+ if (returnCode < 0)\n+ return returnCode;\n+\n+ pgstat_report_wait_start(wait_event_info);\n+ written = pg_pwrite_zeros(VfdCache[file].fd, amount, offset);\n+ pgstat_report_wait_end();\n+\n+ if (written < 0)\n+ return -1;\n+ else if (written != amount)\n\nthis doesn't need to be an else if\n\n+ {\n+ /* if errno is unset, assume problem is no disk space */\n+ if (errno == 0)\n+ errno = ENOSPC;\n+ return -1;\n+ }\n\n+int\n+FileFallocate(File file, off_t offset, off_t amount, uint32 wait_event_info)\n+{\n+#ifdef HAVE_POSIX_FALLOCATE\n+ int returnCode;\n+\n+ Assert(FileIsValid(file));\n+ returnCode = FileAccess(file);\n+ if (returnCode < 0)\n+ return returnCode;\n+\n+ pgstat_report_wait_start(wait_event_info);\n+ returnCode = posix_fallocate(VfdCache[file].fd, offset, amount);\n+ pgstat_report_wait_end();\n+\n+ if (returnCode == 0)\n+ return 0;\n+\n+ /* for compatibility with %m printing etc */\n+ errno = returnCode;\n+\n+ /*\n+ * Return in cases of a \"real\" failure, if fallocate is not supported,\n+ * fall through to the FileZero() backed implementation.\n+ */\n+ if (returnCode != EINVAL && returnCode != EOPNOTSUPP)\n+ return returnCode;\n\nI'm pretty sure you can just delete the below if statement\n\n+ if (returnCode == 0 ||\n+ (returnCode != EINVAL && returnCode != EINVAL))\n+ return returnCode;\n\ndiff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\nindex 352958e1fe..59a65a8305 100644\n--- 
a/src/backend/storage/smgr/md.c\n+++ b/src/backend/storage/smgr/md.c\n@@ -28,6 +28,7 @@\n #include \"access/xlog.h\"\n #include \"access/xlogutils.h\"\n #include \"commands/tablespace.h\"\n+#include \"common/file_utils.h\"\n #include \"miscadmin.h\"\n #include \"pg_trace.h\"\n #include \"pgstat.h\"\n@@ -500,6 +501,116 @@ mdextend(SMgrRelation reln, ForkNumber forknum,\nBlockNumber blocknum,\n Assert(_mdnblocks(reln, forknum, v) <= ((BlockNumber) RELSEG_SIZE));\n }\n\n+/*\n+ * mdzeroextend() -- Add ew zeroed out blocks to the specified relation.\n\nnot sure what ew is\n\n+ *\n+ * Similar to mdrextend(), except the relation can be extended by\n\nmdrextend->mdextend\n\n+ * multiple blocks at once, and that the added blocks will be\nfilled with\n\nI would lose the comma and just say \"and the added blocks will be filled...\"\n\n+void\n+mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n+ BlockNumber blocknum, int nblocks, bool skipFsync)\n\nSo, I think there are a few too many local variables in here, and it\nactually makes it more confusing.\nAssuming you would like to keep the input parameters blocknum and\nnblocks unmodified for debugging/other reasons, here is a suggested\nrefactor of this function\nAlso, I think you can combine the two error cases (I don't know if the\nuser cares what you were trying to extend the file with). I've done this\nbelow also.\n\nvoid\nmdzeroextend(SMgrRelation reln, ForkNumber forknum,\n BlockNumber blocknum, int nblocks, bool skipFsync)\n{\n MdfdVec *v;\n BlockNumber curblocknum = blocknum;\n int remblocks = nblocks;\n\n Assert(nblocks > 0);\n\n /* This assert is too expensive to have on normally ... 
*/\n#ifdef CHECK_WRITE_VS_EXTEND\n Assert(blocknum >= mdnblocks(reln, forknum));\n#endif\n\n /*\n * If a relation manages to grow to 2^32-1 blocks, refuse to extend it any\n * more --- we mustn't create a block whose number actually is\n * InvalidBlockNumber or larger.\n */\n if ((uint64) blocknum + nblocks >= (uint64) InvalidBlockNumber)\n ereport(ERROR,\n (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"cannot extend file \\\"%s\\\" beyond %u blocks\",\n relpath(reln->smgr_rlocator, forknum),\n InvalidBlockNumber)));\n\n while (remblocks > 0)\n {\n int segstartblock = curblocknum % ((BlockNumber)\nRELSEG_SIZE);\n int numblocks = remblocks;\n off_t seekpos = (off_t) BLCKSZ * segstartblock;\n int ret;\n\n if (segstartblock + remblocks > RELSEG_SIZE)\n numblocks = RELSEG_SIZE - segstartblock;\n\n v = _mdfd_getseg(reln, forknum, curblocknum, skipFsync,\nEXTENSION_CREATE);\n\n /*\n * If available and useful, use posix_fallocate() (via FileAllocate())\n * to extend the relation. That's often more efficient than using\n * write(), as it commonly won't cause the kernel to allocate page\n * cache space for the extended pages.\n *\n * However, we don't use FileAllocate() for small extensions, as it\n * defeats delayed allocation on some filesystems. Not clear where\n * that decision should be made though? 
For now just use a cutoff of\n * 8, anything between 4 and 8 worked OK in some local testing.\n */\n if (numblocks > 8)\n ret = FileFallocate(v->mdfd_vfd,\n seekpos, (off_t) BLCKSZ * numblocks,\n WAIT_EVENT_DATA_FILE_EXTEND);\n else\n /*\n * Even if we don't want to use fallocate, we can still extend a\n * bit more efficiently than writing each 8kB block individually.\n * pg_pwrite_zeroes() (via FileZero()) uses\n * pg_pwritev_with_retry() to avoid multiple writes or needing a\n * zeroed buffer for the whole length of the extension.\n */\n ret = FileZero(v->mdfd_vfd,\n seekpos, (off_t) BLCKSZ * numblocks,\n WAIT_EVENT_DATA_FILE_EXTEND);\n\n if (ret != 0)\n ereport(ERROR,\n errcode_for_file_access(),\n errmsg(\"could not extend file \\\"%s\\\": %m\",\n FilePathName(v->mdfd_vfd)),\n errhint(\"Check free disk space.\"));\n\n if (!skipFsync && !SmgrIsTemp(reln))\n register_dirty_segment(reln, forknum, v);\n\n Assert(_mdnblocks(reln, forknum, v) <= ((BlockNumber) RELSEG_SIZE));\n\n remblocks -= numblocks;\n curblocknum += numblocks;\n }\n}\n\ndiff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c\nindex dc466e5414..5224ca5259 100644\n--- a/src/backend/storage/smgr/smgr.c\n+++ b/src/backend/storage/smgr/smgr.c\n@@ -50,6 +50,8 @@ typedef struct f_smgr\n+/*\n+ * smgrzeroextend() -- Add new zeroed out blocks to a file.\n+ *\n+ * Similar to smgrextend(), except the relation can be extended by\n+ * multiple blocks at once, and that the added blocks will be\nfilled with\n+ * zeroes.\n+ */\n\nSimilar grammatical feedback as mdzeroextend.\n\n\n From ad7cd10a6c340d7f7d0adf26d5e39224dfd8439d Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Wed, 26 Oct 2022 12:05:07 -0700\nSubject: [PATCH v5 05/15] bufmgr: Add Pin/UnpinLocalBuffer()\n\ndiff --git a/src/backend/storage/buffer/bufmgr.c\nb/src/backend/storage/buffer/bufmgr.c\nindex fa20fab5a2..6f50dbd212 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ 
b/src/backend/storage/buffer/bufmgr.c\n@@ -4288,18 +4268,16 @@ ConditionalLockBuffer(Buffer buffer)\n }\n\n void\n-BufferCheckOneLocalPin(Buffer buffer)\n+BufferCheckWePinOnce(Buffer buffer)\n\nThis name is weird. Who is we?\n\ndiff --git a/src/backend/storage/buffer/localbuf.c\nb/src/backend/storage/buffer/localbuf.c\nindex 5325ddb663..798c5b93a8 100644\n--- a/src/backend/storage/buffer/localbuf.c\n+++ b/src/backend/storage/buffer/localbuf.c\n+bool\n+PinLocalBuffer(BufferDesc *buf_hdr, bool adjust_usagecount)\n+{\n+ uint32 buf_state;\n+ Buffer buffer = BufferDescriptorGetBuffer(buf_hdr);\n+ int bufid = -(buffer + 1);\n\nYou do\n int buffid = -buffer - 1;\nin UnpinLocalBuffer()\nThey should be consistent.\n\n int bufid = -(buffer + 1);\n\nI think this version is better:\n\n int buffid = -buffer - 1;\n\nSince if buffer is INT_MAX, then the -(buffer + 1) version invokes\nundefined behavior while the -buffer - 1 version doesn't.\n\n From a0228218e2ac299aac754eeb5b2be7ddfc56918d Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Fri, 17 Feb 2023 18:26:34 -0800\nSubject: [PATCH v5 07/15] bufmgr: Acquire and clean victim buffer separately\n\nPreviously we held buffer locks for two buffer mapping partitions at the same\ntime to change the identity of buffers. Particularly for extending relations\nneeding to hold the extension lock while acquiring a victim buffer is\npainful. By separating out the victim buffer acquisition, future commits will\nbe able to change relation extensions to scale better.\n\ndiff --git a/src/backend/storage/buffer/bufmgr.c\nb/src/backend/storage/buffer/bufmgr.c\nindex 3d0683593f..ea423ae484 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -1200,293 +1200,111 @@ BufferAlloc(SMgrRelation smgr, char\nrelpersistence, ForkNumber forkNum,\n\n /*\n * Buffer contents are currently invalid. Try to obtain the right to\n * start I/O. 
If StartBufferIO returns false, then someone else managed\n * to read it before we did, so there's nothing left for BufferAlloc() to\n * do.\n */\n- if (StartBufferIO(buf, true))\n+ if (StartBufferIO(victim_buf_hdr, true))\n *foundPtr = false;\n else\n *foundPtr = true;\n\nI know it was already like this, but since you edited the line already,\ncan we just make this this now?\n\n *foundPtr = !StartBufferIO(victim_buf_hdr, true);\n\n@@ -1595,6 +1413,237 @@ retry:\n StrategyFreeBuffer(buf);\n }\n\n+/*\n+ * Helper routine for GetVictimBuffer()\n+ *\n+ * Needs to be called on a buffer with a valid tag, pinned, but without the\n+ * buffer header spinlock held.\n+ *\n+ * Returns true if the buffer can be reused, in which case the buffer is only\n+ * pinned by this backend and marked as invalid, false otherwise.\n+ */\n+static bool\n+InvalidateVictimBuffer(BufferDesc *buf_hdr)\n+{\n+ /*\n+ * Clear out the buffer's tag and flags and usagecount. This is not\n+ * strictly required, as BM_TAG_VALID/BM_VALID needs to be checked before\n+ * doing anything with the buffer. 
But currently it's beneficial as the\n+ * pre-check for several linear scans of shared buffers just checks the\n+ * tag.\n\nI don't really understand the above comment -- mainly the last sentence.\n\n+static Buffer\n+GetVictimBuffer(BufferAccessStrategy strategy, IOContext io_context)\n+{\n+ BufferDesc *buf_hdr;\n+ Buffer buf;\n+ uint32 buf_state;\n+ bool from_ring;\n+\n+ /*\n+ * Ensure, while the spinlock's not yet held, that there's a free refcount\n+ * entry.\n+ */\n+ ReservePrivateRefCountEntry();\n+ ResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n+\n+ /* we return here if a prospective victim buffer gets used concurrently */\n+again:\n\nWhy use goto instead of a loop here (again is the goto label)?\n\n\n From a7597b79dffaf96807f4a9beea0a39634530298d Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Mon, 24 Oct 2022 16:44:16 -0700\nSubject: [PATCH v5 08/15] bufmgr: Support multiple in-progress IOs by using\n resowner\n\nCommit message should describe why we couldn't support multiple\nin-progress IOs before, I think (e.g. we couldn't be sure that we\ncleared IO_IN_PROGRESS if something happened).\n\n@@ -4709,8 +4704,6 @@ TerminateBufferIO(BufferDesc *buf, bool\nclear_dirty, uint32 set_flag_bits)\n {\n uint32 buf_state;\n\nI noticed that the comment above TerminateBufferIO() says\n * TerminateBufferIO: release a buffer we were doing I/O on\n * (Assumptions)\n * My process is executing IO for the buffer\n\nCan we still say this is an assumption? 
What about when it is being\ncleaned up after being called from AbortBufferIO()\n\ndiff --git a/src/backend/utils/resowner/resowner.c\nb/src/backend/utils/resowner/resowner.c\nindex 19b6241e45..fccc59b39d 100644\n--- a/src/backend/utils/resowner/resowner.c\n+++ b/src/backend/utils/resowner/resowner.c\n@@ -121,6 +121,7 @@ typedef struct ResourceOwnerData\n\n /* We have built-in support for remembering: */\n ResourceArray bufferarr; /* owned buffers */\n+ ResourceArray bufferioarr; /* in-progress buffer IO */\n ResourceArray catrefarr; /* catcache references */\n ResourceArray catlistrefarr; /* catcache-list pins */\n ResourceArray relrefarr; /* relcache references */\n@@ -441,6 +442,7 @@ ResourceOwnerCreate(ResourceOwner parent, const char *name)\n\nMaybe worth mentioning in-progress buffer IO in resowner README? I know\nit doesn't claim to be exhaustive, so, up to you.\n\nAlso, I realize that existing code in this file has the extraneous\nparentheses, but maybe it isn't worth staying consistent with that?\nas in: &(owner->bufferioarr)\n\n+ */\n+void\n+ResourceOwnerRememberBufferIO(ResourceOwner owner, Buffer buffer)\n+{\n+ ResourceArrayAdd(&(owner->bufferioarr), BufferGetDatum(buffer));\n+}\n+\n\n\n From f26d1fa7e528d04436402aa8f94dc2442999dde3 Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Wed, 1 Mar 2023 13:24:19 -0800\nSubject: [PATCH v5 09/15] bufmgr: Move relation extension handling into\n ExtendBufferedRel{By,To,}\n\ndiff --git a/src/backend/storage/buffer/bufmgr.c\nb/src/backend/storage/buffer/bufmgr.c\nindex 3c95b87bca..4e07a5bc48 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n\n+/*\n+ * Extend relation by multiple blocks.\n+ *\n+ * Tries to extend the relation by extend_by blocks. Depending on the\n+ * availability of resources the relation may end up being extended by a\n+ * smaller number of pages (unless an error is thrown, always by at least one\n+ * page). 
*extended_by is updated to the number of pages the relation has been\n+ * extended to.\n+ *\n+ * buffers needs to be an array that is at least extend_by long. Upon\n+ * completion, the first extend_by array elements will point to a pinned\n+ * buffer.\n+ *\n+ * If EB_LOCK_FIRST is part of flags, the first returned buffer is\n+ * locked. This is useful for callers that want a buffer that is guaranteed to\n+ * be empty.\n\nThis should document what the returned BlockNumber is.\nAlso, instead of having extend_by and extended_by, how about just having\none which is set by the caller to the desired number to extend by and\nthen overwritten in this function to the value it successfully extended\nby.\n\nIt would be nice if the function returned the number it extended by\ninstead of the BlockNumber.\n\n+ */\n+BlockNumber\n+ExtendBufferedRelBy(ExtendBufferedWhat eb,\n+ ForkNumber fork,\n+ BufferAccessStrategy strategy,\n+ uint32 flags,\n+ uint32 extend_by,\n+ Buffer *buffers,\n+ uint32 *extended_by)\n+{\n+ Assert((eb.rel != NULL) ^ (eb.smgr != NULL));\n\nCan we turn these into !=\n\n Assert((eb.rel != NULL) != (eb.smgr != NULL));\n\nsince it is easier to understand.\n\n(it is also in ExtendBufferedRelTo())\n\n+ Assert(eb.smgr == NULL || eb.relpersistence != 0);\n+ Assert(extend_by > 0);\n+\n+ if (eb.smgr == NULL)\n+ {\n+ eb.smgr = RelationGetSmgr(eb.rel);\n+ eb.relpersistence = eb.rel->rd_rel->relpersistence;\n+ }\n+\n+ return ExtendBufferedRelCommon(eb, fork, strategy, flags,\n+ extend_by, InvalidBlockNumber,\n+ buffers, extended_by);\n+}\n\n+ * Extend the relation so it is at least extend_to blocks large, read buffer\n\nUse of \"read buffer\" here is confusing. We only read the block if, after\nwe try extending the relation, someone else already did so and we have\nto read the block they extended in, right?\n\n+ * (extend_to - 1).\n+ *\n+ * This is useful for callers that want to write a specific page, regardless\n+ * of the current size of the relation (e.g. 
useful for visibilitymap and for\n+ * crash recovery).\n+ */\n+Buffer\n+ExtendBufferedRelTo(ExtendBufferedWhat eb,\n+ ForkNumber fork,\n+ BufferAccessStrategy strategy,\n+ uint32 flags,\n+ BlockNumber extend_to,\n+ ReadBufferMode mode)\n+{\n\n+ while (current_size < extend_to)\n+ {\n\nCan declare buffers variable here.\n+ Buffer buffers[64];\n\n+ uint32 num_pages = lengthof(buffers);\n+ BlockNumber first_block;\n+\n+ if ((uint64) current_size + num_pages > extend_to)\n+ num_pages = extend_to - current_size;\n+\n+ first_block = ExtendBufferedRelCommon(eb, fork, strategy, flags,\n+ num_pages, extend_to,\n+ buffers, &extended_by);\n+\n+ current_size = first_block + extended_by;\n+ Assert(current_size <= extend_to);\n+ Assert(num_pages != 0 || current_size >= extend_to);\n+\n+ for (int i = 0; i < extended_by; i++)\n+ {\n+ if (first_block + i != extend_to - 1)\n\nIs there a way we could avoid pinning these other buffers to begin with\n(e.g. passing a parameter to ExtendBufferedRelCommon())\n\n+ ReleaseBuffer(buffers[i]);\n+ else\n+ buffer = buffers[i];\n+ }\n+ }\n\n+ /*\n+ * It's possible that another backend concurrently extended the\n+ * relation. In that case read the buffer.\n+ *\n+ * XXX: Should we control this via a flag?\n+ */\n\nI feel like there needs to be a more explicit comment about how you\ncould end up in this situation -- e.g. someone else extends the relation\nand so smgrnblocks returns a value that is greater than extend_to, so\nbuffer stays InvalidBuffer\n\n+ if (buffer == InvalidBuffer)\n+ {\n+ bool hit;\n+\n+ Assert(extended_by == 0);\n+ buffer = ReadBuffer_common(eb.smgr, eb.relpersistence,\n+ fork, extend_to - 1, mode, strategy,\n+ &hit);\n+ }\n+\n+ return buffer;\n+}\n\nDo we use compound literals? 
Here, this could be:\n\n buffer = ReadBuffer_common(eb.smgr, eb.relpersistence,\n fork, extend_to - 1, mode, strategy,\n &(bool) {0});\n\nTo eliminate the extraneous hit variable.\n\n\n\n /*\n * ReadBuffer_common -- common logic for all ReadBuffer variants\n@@ -801,35 +991,36 @@ ReadBuffer_common(SMgrRelation smgr, char\nrelpersistence, ForkNumber forkNum,\n bool found;\n IOContext io_context;\n IOObject io_object;\n- bool isExtend;\n bool isLocalBuf = SmgrIsTemp(smgr);\n\n *hit = false;\n\n+ /*\n+ * Backward compatibility path, most code should use\n+ * ExtendRelationBuffered() instead, as acquiring the extension lock\n+ * inside ExtendRelationBuffered() scales a lot better.\n\nThink these are old function names in the comment\n\n+static BlockNumber\n+ExtendBufferedRelShared(ExtendBufferedWhat eb,\n+ ForkNumber fork,\n+ BufferAccessStrategy strategy,\n+ uint32 flags,\n+ uint32 extend_by,\n+ BlockNumber extend_upto,\n+ Buffer *buffers,\n+ uint32 *extended_by)\n+{\n+ BlockNumber first_block;\n+ IOContext io_context = IOContextForStrategy(strategy);\n+\n+ LimitAdditionalPins(&extend_by);\n+\n+ /*\n+ * Acquire victim buffers for extension without holding extension lock.\n+ * Writing out victim buffers is the most expensive part of extending the\n+ * relation, particularly when doing so requires WAL flushes. Zeroing out\n+ * the buffers is also quite expensive, so do that before holding the\n+ * extension lock as well.\n+ *\n+ * These pages are pinned by us and not valid. 
While we hold the pin they\n+ * can't be acquired as victim buffers by another backend.\n+ */\n+ for (uint32 i = 0; i < extend_by; i++)\n+ {\n+ Block buf_block;\n+\n+ buffers[i] = GetVictimBuffer(strategy, io_context);\n+ buf_block = BufHdrGetBlock(GetBufferDescriptor(buffers[i] - 1));\n+\n+ /* new buffers are zero-filled */\n+ MemSet((char *) buf_block, 0, BLCKSZ);\n+ }\n+\n+ /*\n+ * Lock relation against concurrent extensions, unless requested not to.\n+ *\n+ * We use the same extension lock for all forks. That's unnecessarily\n+ * restrictive, but currently extensions for forks don't happen often\n+ * enough to make it worth locking more granularly.\n+ *\n+ * Note that another backend might have extended the relation by the time\n+ * we get the lock.\n+ */\n+ if (!(flags & EB_SKIP_EXTENSION_LOCK))\n+ {\n+ LockRelationForExtension(eb.rel, ExclusiveLock);\n+ eb.smgr = RelationGetSmgr(eb.rel);\n+ }\n+\n+ /*\n+ * If requested, invalidate size cache, so that smgrnblocks asks the\n+ * kernel.\n+ */\n+ if (flags & EB_CLEAR_SIZE_CACHE)\n+ eb.smgr->smgr_cached_nblocks[fork] = InvalidBlockNumber;\n\nI don't see this in master, is it new?\n\n+ first_block = smgrnblocks(eb.smgr, fork);\n+\n\nThe below needs a better comment explaining what it is handling. e.g. 
if\nwe end up extending by less than we planned, unpin all of the surplus\nvictim buffers.\n\n+ if (extend_upto != InvalidBlockNumber)\n+ {\n+ uint32 old_num_pages = extend_by;\n\nmaybe call this something like original_extend_by\n\ndiff --git a/src/backend/storage/buffer/localbuf.c\nb/src/backend/storage/buffer/localbuf.c\nindex 5b44b0be8b..0528fddf99 100644\n--- a/src/backend/storage/buffer/localbuf.c\n+++ b/src/backend/storage/buffer/localbuf.c\n+BlockNumber\n+ExtendBufferedRelLocal(ExtendBufferedWhat eb,\n+ ForkNumber fork,\n+ uint32 flags,\n+ uint32 extend_by,\n+ BlockNumber extend_upto,\n+ Buffer *buffers,\n+ uint32 *extended_by)\n+{\n\n+ victim_buf_id = -(buffers[i] + 1);\n\nsame comment here as before.\n\n+ * Flags influencing the behaviour of ExtendBufferedRel*\n+ */\n+typedef enum ExtendBufferedFlags\n+{\n+ /*\n+ * Don't acquire extension lock. This is safe only if the relation isn't\n+ * shared, an access exclusive lock is held or if this is the startup\n+ * process.\n+ */\n+ EB_SKIP_EXTENSION_LOCK = (1 << 0),\n+\n+ /* Is this extension part of recovery? */\n+ EB_PERFORMING_RECOVERY = (1 << 1),\n+\n+ /*\n+ * Should the fork be created if it does not currently exist? This likely\n+ * only ever makes sense for relation forks.\n+ */\n+ EB_CREATE_FORK_IF_NEEDED = (1 << 2),\n+\n+ /* Should the first (possibly only) return buffer be returned locked? */\n+ EB_LOCK_FIRST = (1 << 3),\n+\n+ /* Should the smgr size cache be cleared? 
*/\n+ EB_CLEAR_SIZE_CACHE = (1 << 4),\n+\n+ /* internal flags follow */\n\nI don't understand what this comment means (\"internal flags follow\")\n\n+ EB_LOCK_TARGET = (1 << 5),\n+} ExtendBufferedFlags;\n\n+typedef struct ExtendBufferedWhat\n\nMaybe this should be called like BufferedExtendTarget or something?\n\n+{\n+ Relation rel;\n+ struct SMgrRelationData *smgr;\n+ char relpersistence;\n+} ExtendBufferedWhat;\n\n From e4438c0eb87035e4cefd1de89458a8d88c90c0e3 Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Sun, 23 Oct 2022 14:44:43 -0700\nSubject: [PATCH v5 11/15] heapam: Add num_pages to RelationGetBufferForTuple()\n\nThis will be useful to compute the number of pages to extend a relation by.\n\n\ndiff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\nindex cf4b917eb4..500904897d 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -2050,6 +2053,33 @@ heap_prepare_insert(Relation relation,\nHeapTuple tup, TransactionId xid,\n return tup;\n }\n\n+/*\n+ * Helper for heap_multi_insert() that computes the number of full pages s\n\nno space after page before s\n\n+ */\n+static int\n+heap_multi_insert_pages(HeapTuple *heaptuples, int done, int ntuples,\nSize saveFreeSpace)\n+{\n+ size_t page_avail;\n+ int npages = 0;\n+\n+ page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace;\n+ npages++;\n\ncan this not just be this:\n\nsize_t page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace;\nint npages = 1;\n\n\n From 5d2be27caf8f4ee8f26841b2aa1674c90bd51754 Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Wed, 26 Oct 2022 14:14:11 -0700\nSubject: [PATCH v5 12/15] hio: Use ExtendBufferedRelBy()\n\n---\n src/backend/access/heap/hio.c | 285 +++++++++++++++++-----------------\n 1 file changed, 146 insertions(+), 139 deletions(-)\n\ndiff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c\nindex 65886839e7..48cfcff975 100644\n--- 
a/src/backend/access/heap/hio.c\n+++ b/src/backend/access/heap/hio.c\n@@ -354,6 +270,9 @@ RelationGetBufferForTuple(Relation relation, Size len,\n\nso in RelationGetBufferForTuple() up above where your changes start,\nthere is this code\n\n\n /*\n * We first try to put the tuple on the same page we last inserted a tuple\n * on, as cached in the BulkInsertState or relcache entry. If that\n * doesn't work, we ask the Free Space Map to locate a suitable page.\n * Since the FSM's info might be out of date, we have to be prepared to\n * loop around and retry multiple times. (To insure this isn't an infinite\n * loop, we must update the FSM with the correct amount of free space on\n * each page that proves not to be suitable.) If the FSM has no record of\n * a page with enough free space, we give up and extend the relation.\n *\n * When use_fsm is false, we either put the tuple onto the existing target\n * page or extend the relation.\n */\n if (bistate && bistate->current_buf != InvalidBuffer)\n {\n targetBlock = BufferGetBlockNumber(bistate->current_buf);\n }\n else\n targetBlock = RelationGetTargetBlock(relation);\n\n if (targetBlock == InvalidBlockNumber && use_fsm)\n {\n /*\n * We have no cached target page, so ask the FSM for an initial\n * target.\n */\n targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace);\n }\n\nAnd, I was thinking how, ReadBufferBI() only has one caller now\n(RelationGetBufferForTuple()) and, this caller basically already has\nchecked for the case in the inside of ReadBufferBI() (the code I pasted\nabove)\n\n /* If we have the desired block already pinned, re-pin and return it */\n if (bistate->current_buf != InvalidBuffer)\n {\n if (BufferGetBlockNumber(bistate->current_buf) == targetBlock)\n {\n /*\n * Currently the LOCK variants are only used for extending\n * relation, which should never reach this branch.\n */\n Assert(mode != RBM_ZERO_AND_LOCK &&\n mode != RBM_ZERO_AND_CLEANUP_LOCK);\n\n 
IncrBufferRefCount(bistate->current_buf);\n return bistate->current_buf;\n }\n /* ... else drop the old buffer */\n\nSo, I was thinking maybe there is some way to inline the logic for\nReadBufferBI(), because I think it would feel more streamlined to me.\n\n@@ -558,18 +477,46 @@ loop:\n ReleaseBuffer(buffer);\n }\n\nOh, and I forget which commit introduced BulkInsertState->next_free and\nlast_free, but I remember thinking that it didn't seem to fit with the\nother parts of that commit.\n\n- /* Without FSM, always fall out of the loop and extend */\n- if (!use_fsm)\n- break;\n+ if (bistate\n+ && bistate->next_free != InvalidBlockNumber\n+ && bistate->next_free <= bistate->last_free)\n+ {\n+ /*\n+ * We bulk extended the relation before, and there are still some\n+ * unused pages from that extension, so we don't need to look in\n+ * the FSM for a new page. But do record the free space from the\n+ * last page, somebody might insert narrower tuples later.\n+ */\n\nWhy couldn't we have found out that we bulk-extended before and get the\nblock from there up above the while loop?\n\n+ if (use_fsm)\n+ RecordPageWithFreeSpace(relation, targetBlock, pageFreeSpace);\n\n- /*\n- * Update FSM as to condition of this page, and ask for another page\n- * to try.\n- */\n- targetBlock = RecordAndGetPageWithFreeSpace(relation,\n- targetBlock,\n- pageFreeSpace,\n- targetFreeSpace);\n+ Assert(bistate->last_free != InvalidBlockNumber &&\n\nYou don't need the below half of the assert.\n\n+ bistate->next_free <= bistate->last_free);\n+ targetBlock = bistate->next_free;\n+ if (bistate->next_free >= bistate->last_free)\n\nthey can only be equal at this point\n\n+ {\n+ bistate->next_free = InvalidBlockNumber;\n+ bistate->last_free = InvalidBlockNumber;\n+ }\n+ else\n+ bistate->next_free++;\n+ }\n+ else if (!use_fsm)\n+ {\n+ /* Without FSM, always fall out of the loop and extend */\n+ break;\n+ }\n\nIt would be nice to have a comment explaining why this is in its own\nelse if instead of 
breaking earlier (i.e. !use_fsm is still a valid case\nin the if branch above it)\n\n+ else\n+ {\n+ /*\n+ * Update FSM as to condition of this page, and ask for another\n+ * page to try.\n+ */\n+ targetBlock = RecordAndGetPageWithFreeSpace(relation,\n+ targetBlock,\n+ pageFreeSpace,\n+ targetFreeSpace);\n+ }\n\nwe can get rid of needLock and waitcount variables like this\n\n+#define MAX_BUFFERS 64\n+ Buffer victim_buffers[MAX_BUFFERS];\n+ BlockNumber firstBlock = InvalidBlockNumber;\n+ BlockNumber firstBlockFSM = InvalidBlockNumber;\n+ BlockNumber curBlock;\n+ uint32 extend_by_pages;\n+ uint32 no_fsm_pages;\n+ uint32 waitcount;\n+\n+ extend_by_pages = num_pages;\n+\n+ /*\n+ * Multiply the number of pages to extend by the number of waiters. Do\n+ * this even if we're not using the FSM, as it does relieve\n+ * contention. Pages will be found via bistate->next_free.\n+ */\n+ if (needLock)\n+ waitcount = RelationExtensionLockWaiterCount(relation);\n+ else\n+ waitcount = 0;\n+ extend_by_pages += extend_by_pages * waitcount;\n\n if (!RELATION_IS_LOCAL(relation))\n extend_by_pages += extend_by_pages *\nRelationExtensionLockWaiterCount(relation);\n\n+\n+ /*\n+ * can't extend by more than MAX_BUFFERS, we need to pin them all\n+ * concurrently. FIXME: Need an NBuffers / MaxBackends type limit\n+ * here.\n+ */\n+ extend_by_pages = Min(extend_by_pages, MAX_BUFFERS);\n+\n+ /*\n+ * How many of the extended pages not to enter into the FSM.\n+ *\n+ * Only enter pages that we don't need ourselves into the FSM.\n+ * Otherwise every other backend will immediately try to use the pages\n+ * this backend neds itself, causing unnecessary contention.\n+ *\n+ * Bulk extended pages are remembered in bistate->next_free_buffer. 
So\n+ * without a bistate we can't directly make use of them.\n+ *\n+ * Never enter the page returned into the FSM, we'll immediately use\n+ * it.\n+ */\n+ if (num_pages > 1 && bistate == NULL)\n+ no_fsm_pages = 1;\n+ else\n+ no_fsm_pages = num_pages;\n\nthis is more clearly this:\n no_fsm_pages = bistate == NULL ? 1 : num_pages;\n\n- /*\n- * Release the file-extension lock; it's now OK for someone else to extend\n- * the relation some more.\n- */\n- if (needLock)\n- UnlockRelationForExtension(relation, ExclusiveLock);\n+ if (bistate)\n+ {\n+ if (extend_by_pages > 1)\n+ {\n+ bistate->next_free = firstBlock + 1;\n+ bistate->last_free = firstBlock + extend_by_pages - 1;\n+ }\n+ else\n+ {\n+ bistate->next_free = InvalidBlockNumber;\n+ bistate->last_free = InvalidBlockNumber;\n+ }\n+ }\n+\n+ buffer = victim_buffers[0];\n\nIf we move buffer = up, we can have only one if (bistate)\n\n+ if (bistate)\n+ {\n+ IncrBufferRefCount(buffer);\n+ bistate->current_buf = buffer;\n+ }\n+ }\n\nlike this:\n\n buffer = victim_buffers[0];\n\n if (bistate)\n {\n if (extend_by_pages > 1)\n {\n bistate->next_free = firstBlock + 1;\n bistate->last_free = firstBlock + extend_by_pages - 1;\n }\n else\n {\n bistate->next_free = InvalidBlockNumber;\n bistate->last_free = InvalidBlockNumber;\n }\n\n IncrBufferRefCount(buffer);\n bistate->current_buf = buffer;\n }\n\n\n From 6711e45bed59ee07ec277b9462f4745603a3d4a4 Mon Sep 17 00:00:00 2001\nFrom: Andres Freund <andres@anarazel.de>\nDate: Sun, 23 Oct 2022 14:41:46 -0700\nSubject: [PATCH v5 15/15] bufmgr: debug: Add PrintBuffer[Desc]\n\nUseful for development. 
Perhaps we should polish these and keep them?\ndiff --git a/src/backend/storage/buffer/bufmgr.c\nb/src/backend/storage/buffer/bufmgr.c\nindex 4e07a5bc48..0d382cd787 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n+\n+ fprintf(stderr, \"%d: [%u] msg: %s, rel: %s, block %u: refcount:\n%u / %u, usagecount: %u, flags:%s%s%s%s%s%s%s%s%s%s\\n\",\n+ MyProcPid,\n+ buffer,\n+ msg,\n+ path,\n+ blockno,\n+ BUF_STATE_GET_REFCOUNT(buf_state),\n+ GetPrivateRefCount(buffer),\n+ BUF_STATE_GET_USAGECOUNT(buf_state),\n+ buf_state & BM_LOCKED ? \" BM_LOCKED\" : \"\",\n+ buf_state & BM_DIRTY ? \" BM_DIRTY\" : \"\",\n+ buf_state & BM_VALID ? \" BM_VALID\" : \"\",\n+ buf_state & BM_TAG_VALID ? \" BM_TAG_VALID\" : \"\",\n+ buf_state & BM_IO_IN_PROGRESS ? \" BM_IO_IN_PROGRESS\" : \"\",\n+ buf_state & BM_IO_ERROR ? \" BM_IO_ERROR\" : \"\",\n+ buf_state & BM_JUST_DIRTIED ? \" BM_JUST_DIRTIED\" : \"\",\n+ buf_state & BM_PIN_COUNT_WAITER ? \" BM_PIN_COUNT_WAITER\" : \"\",\n+ buf_state & BM_CHECKPOINT_NEEDED ? \" BM_CHECKPOINT_NEEDED\" : \"\",\n+ buf_state & BM_PERMANENT ? \" BM_PERMANENT\" : \"\"\n+ );\n+}\n\nHow about this\n\n#define FLAG_DESC(flag) (buf_state & (flag) ? \" \" #flag : \"\")\n FLAG_DESC(BM_LOCKED),\n FLAG_DESC(BM_DIRTY),\n FLAG_DESC(BM_VALID),\n FLAG_DESC(BM_TAG_VALID),\n FLAG_DESC(BM_IO_IN_PROGRESS),\n FLAG_DESC(BM_IO_ERROR),\n FLAG_DESC(BM_JUST_DIRTIED),\n FLAG_DESC(BM_PIN_COUNT_WAITER),\n FLAG_DESC(BM_CHECKPOINT_NEEDED),\n FLAG_DESC(BM_PERMANENT)\n#undef FLAG_DESC\n\n+\n+void\n+PrintBuffer(Buffer buffer, const char *msg)\n+{\n+ BufferDesc *buf_hdr = GetBufferDescriptor(buffer - 1);\n\nno need for this variable\n\n+\n+ PrintBufferDesc(buf_hdr, msg);\n+}\n\n- Melanie\n\n\n",
"msg_date": "Sun, 26 Mar 2023 17:42:45 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "At Sun, 26 Mar 2023 12:26:59 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> Attached is v5. Lots of comment polishing, a bit of renaming. I extracted the\n> relation extension related code in hio.c back into its own function.\n> \n> While reviewing the hio.c code, I did realize that too much stuff is done\n> while holding the buffer lock. See also the pre-existing issue\n> https://postgr.es/m/20230325025740.wzvchp2kromw4zqz%40awork3.anarazel.de\n\n0001, 0002 looks fine to me.\n\n0003 adds the new function FileFallocte, but we already have\nAllocateFile. Although fd.c contains functions with varying word\norders, it could be confusing that closely named functions have\ndifferent naming conventions.\n\n+\t/*\n+\t * Return in cases of a \"real\" failure, if fallocate is not supported,\n+\t * fall through to the FileZero() backed implementation.\n+\t */\n+\tif (returnCode != EINVAL && returnCode != EOPNOTSUPP)\n+\t\treturn returnCode;\n\nI'm not entirely sure, but man 2 fallocate tells that ENOSYS also can\nbe returned. Some googling indicate that ENOSYS might need the same\namendment to EOPNOTSUPP. However, I'm not clear on why man\nposix_fallocate donsn't mention the former.\n\n\n+\t\t(returnCode != EINVAL && returnCode != EINVAL))\n:)\n\n\nFileGetRawDesc(File file)\n {\n \tAssert(FileIsValid(file));\n+\n+\tif (FileAccess(file) < 0)\n+\t\treturn -1;\n+\n\nThe function's comment is provided below.\n\n> * The returned file descriptor will be valid until the file is closed, but\n> * there are a lot of things that can make that happen. So the caller should\n> * be careful not to do much of anything else before it finishes using the\n> * returned file descriptor.\n\nSo, the responsibility to make sure the file is valid seems to lie\nwith the callers, although I'm not sure since there aren't any\nfunction users in the tree. I'm unclear as to why FileSize omits the\ncase lruLessRecently != file. 
When examining similar functions, such\nas FileGetRawFlags and FileGetRawMode, I'm puzzled to find that\nneither FileAccess() nor BasicOpenFilePerm sets the struct members\nreferred to by the functions. This makes me question the usefulness\nof these functions, including FileGetRawDesc(). Regardless, since the\npatchset doesn't use FileGetRawDesc(), I don't believe a fix is\nnecessary in this patch set.\n\n+\tif ((uint64) blocknum + nblocks >= (uint64) InvalidBlockNumber)\n\nI'm not sure it is appropriate to assume InvalidBlockNumber equals\nMaxBlockNumber + 1 in this context.\n\n\n+\t\tint\t\t\tsegstartblock = curblocknum % ((BlockNumber) RELSEG_SIZE);\n+\t\tint\t\t\tsegendblock = (curblocknum % ((BlockNumber) RELSEG_SIZE)) + remblocks;\n+\t\toff_t\t\tseekpos = (off_t) BLCKSZ * segstartblock;\n\nsegendblock can be defined as \"segstartblock + remblocks\", which would\nbe clearer.\n\n+\t\t * If available and useful, use posix_fallocate() (via FileAllocate())\n\nFileFallocate()?\n\n\n+\t\t * However, we don't use FileAllocate() for small extensions, as it\n+\t\t * defeats delayed allocation on some filesystems. Not clear where\n+\t\t * that decision should be made though? For now just use a cutoff of\n+\t\t * 8, anything between 4 and 8 worked OK in some local testing.\n\nThis choice is quite similar to the one FileFallocate() makes. However, I'm\nnot sure FileFallocate() itself should be doing this.\n\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Mar 2023 15:32:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-26 17:42:45 -0400, Melanie Plageman wrote:\n> Below is my review of a slightly older version than you just posted --\n> much of it you may have already addressed.\n\nFar from all is already - thanks for the review!\n\n\n> From 3a6c3f41000e057bae12ab4431e6bb1c5f3ec4b0 Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Mon, 20 Mar 2023 21:57:40 -0700\n> Subject: [PATCH v5 01/15] createdb-using-wal-fixes\n>\n> This could use a more detailed commit message -- I don't really get what\n> it is doing\n\nIt's a fix for a bug that I encountered while hacking on this, it since has\nbeen committed.\n\ncommit 5df319f3d55d09fadb4f7e4b58c5b476a3aeceb4\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2023-03-20 21:57:40 -0700\n\n Fix memory leak and inefficiency in CREATE DATABASE ... STRATEGY WAL_LOG\n\n RelationCopyStorageUsingBuffer() did not free the strategies used to access\n the source / target relation. They memory was released at the end of the\n transaction, but when using a template database with a lot of relations, the\n temporary leak can become big prohibitively big.\n\n RelationCopyStorageUsingBuffer() acquired the buffer for the target relation\n with RBM_NORMAL, therefore requiring a read of a block guaranteed to be\n zero. 
Use RBM_ZERO_AND_LOCK instead.\n\n Reviewed-by: Robert Haas <robertmhaas@gmail.com>\n Discussion: https://postgr.es/m/20230321070113.o2vqqxogjykwgfrr@awork3.anarazel.de\n Backpatch: 15-, where STRATEGY WAL_LOG was introduced\n\n\n> From 6faba69c241fd5513022bb042c33af09d91e84a6 Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 1 Jul 2020 19:06:45 -0700\n> Subject: [PATCH v5 02/15] Add some error checking around pinning\n>\n> ---\n> src/backend/storage/buffer/bufmgr.c | 40 ++++++++++++++++++++---------\n> src/include/storage/bufmgr.h | 1 +\n> 2 files changed, 29 insertions(+), 12 deletions(-)\n>\n\n> diff --git a/src/backend/storage/buffer/bufmgr.c\n> b/src/backend/storage/buffer/bufmgr.c\n> index 95212a3941..fa20fab5a2 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -4283,6 +4287,25 @@ ConditionalLockBuffer(Buffer buffer)\n> LW_EXCLUSIVE);\n> }\n>\n> +void\n> +BufferCheckOneLocalPin(Buffer buffer)\n> +{\n> + if (BufferIsLocal(buffer))\n> + {\n> + /* There should be exactly one pin */\n> + if (LocalRefCount[-buffer - 1] != 1)\n> + elog(ERROR, \"incorrect local pin count: %d\",\n> + LocalRefCount[-buffer - 1]);\n> + }\n> + else\n> + {\n> + /* There should be exactly one local pin */\n> + if (GetPrivateRefCount(buffer) != 1)\n>\n> I'd rather this be an else if (was already like this, but, no reason not\n> to change it now)\n\nI don't like that much - it'd break the symmetry between local / non-local.\n\n\n> +/*\n> + * Zero a region of the file.\n> + *\n> + * Returns 0 on success, -1 otherwise. 
In the latter case errno is set to the\n> + * appropriate error.\n> + */\n> +int\n> +FileZero(File file, off_t offset, off_t amount, uint32 wait_event_info)\n> +{\n> + int returnCode;\n> + ssize_t written;\n> +\n> + Assert(FileIsValid(file));\n> + returnCode = FileAccess(file);\n> + if (returnCode < 0)\n> + return returnCode;\n> +\n> + pgstat_report_wait_start(wait_event_info);\n> + written = pg_pwrite_zeros(VfdCache[file].fd, amount, offset);\n> + pgstat_report_wait_end();\n> +\n> + if (written < 0)\n> + return -1;\n> + else if (written != amount)\n>\n> this doesn't need to be an else if\n\nYou mean it could be a \"bare\" if instead? I don't really think that's clearer.\n\n\n> + {\n> + /* if errno is unset, assume problem is no disk space */\n> + if (errno == 0)\n> + errno = ENOSPC;\n> + return -1;\n> + }\n>\n> +int\n> +FileFallocate(File file, off_t offset, off_t amount, uint32 wait_event_info)\n> +{\n> +#ifdef HAVE_POSIX_FALLOCATE\n> + int returnCode;\n> +\n> + Assert(FileIsValid(file));\n> + returnCode = FileAccess(file);\n> + if (returnCode < 0)\n> + return returnCode;\n> +\n> + pgstat_report_wait_start(wait_event_info);\n> + returnCode = posix_fallocate(VfdCache[file].fd, offset, amount);\n> + pgstat_report_wait_end();\n> +\n> + if (returnCode == 0)\n> + return 0;\n> +\n> + /* for compatibility with %m printing etc */\n> + errno = returnCode;\n> +\n> + /*\n> + * Return in cases of a \"real\" failure, if fallocate is not supported,\n> + * fall through to the FileZero() backed implementation.\n> + */\n> + if (returnCode != EINVAL && returnCode != EOPNOTSUPP)\n> + return returnCode;\n>\n> I'm pretty sure you can just delete the below if statement\n>\n> + if (returnCode == 0 ||\n> + (returnCode != EINVAL && returnCode != EINVAL))\n> + return returnCode;\n\nHm. 
I don't see how - wouldn't that lead us to call FileZero(), even if\nFileFallocate() succeeded or failed (rather than not being supported)?\n\n>\n> +/*\n> + * mdzeroextend() -- Add ew zeroed out blocks to the specified relation.\n>\n> not sure what ew is\n\nA hurried new :)\n\n\n> + *\n> + * Similar to mdrextend(), except the relation can be extended by\n>\n> mdrextend->mdextend\n\n> + * multiple blocks at once, and that the added blocks will be\n> filled with\n>\n> I would lose the comma and just say \"and the added blocks will be filled...\"\n\nDone.\n\n\n> +void\n> +mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n> + BlockNumber blocknum, int nblocks, bool skipFsync)\n>\n> So, I think there are a few too many local variables in here, and it\n> actually makes it more confusing.\n> Assuming you would like to keep the input parameters blocknum and\n> nblocks unmodified for debugging/other reasons, here is a suggested\n> refactor of this function\n\nI'm mostly adopting this.\n\n\n> Also, I think you can combine the two error cases (I don't know if the\n> user cares what you were trying to extend the file with).\n\nHm. I do find it a somewhat useful distinction for figuring out problems - we\nhaven't used posix_fallocate for files so far, it seems plausible we'd hit\nsome portability issues. We could make it an errdetail(), I guess?\n\n\n\n> void\n> mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n> BlockNumber blocknum, int nblocks, bool skipFsync)\n> {\n> MdfdVec *v;\n> BlockNumber curblocknum = blocknum;\n> int remblocks = nblocks;\n> Assert(nblocks > 0);\n>\n> /* This assert is too expensive to have on normally ... 
*/\n> #ifdef CHECK_WRITE_VS_EXTEND\n> Assert(blocknum >= mdnblocks(reln, forknum));\n> #endif\n>\n> /*\n> * If a relation manages to grow to 2^32-1 blocks, refuse to extend it any\n> * more --- we mustn't create a block whose number actually is\n> * InvalidBlockNumber or larger.\n> */\n> if ((uint64) blocknum + nblocks >= (uint64) InvalidBlockNumber)\n> ereport(ERROR,\n> (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> errmsg(\"cannot extend file \\\"%s\\\" beyond %u blocks\",\n> relpath(reln->smgr_rlocator, forknum),\n> InvalidBlockNumber)));\n>\n> while (remblocks > 0)\n> {\n> int segstartblock = curblocknum % ((BlockNumber)\n> RELSEG_SIZE);\n\nHm - this shouldn't be an int - that's my fault, not yours...\n\n\n\n\n> From ad7cd10a6c340d7f7d0adf26d5e39224dfd8439d Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 26 Oct 2022 12:05:07 -0700\n> Subject: [PATCH v5 05/15] bufmgr: Add Pin/UnpinLocalBuffer()\n>\n> diff --git a/src/backend/storage/buffer/bufmgr.c\n> b/src/backend/storage/buffer/bufmgr.c\n> index fa20fab5a2..6f50dbd212 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -4288,18 +4268,16 @@ ConditionalLockBuffer(Buffer buffer)\n> }\n>\n> void\n> -BufferCheckOneLocalPin(Buffer buffer)\n> +BufferCheckWePinOnce(Buffer buffer)\n>\n> This name is weird. Who is we?\n\nThe current backend. I.e. 
the function checks that the current backend pins the\nbuffer exactly once, rather than that *any* backend pins it once.\n\nI now see that BufferIsPinned() is named, IMO, misleadingly, more generally,\neven though it also just applies to pins by the current backend.\n\n\n> diff --git a/src/backend/storage/buffer/localbuf.c\n> b/src/backend/storage/buffer/localbuf.c\n> index 5325ddb663..798c5b93a8 100644\n> --- a/src/backend/storage/buffer/localbuf.c\n> +++ b/src/backend/storage/buffer/localbuf.c\n> +bool\n> +PinLocalBuffer(BufferDesc *buf_hdr, bool adjust_usagecount)\n> +{\n> + uint32 buf_state;\n> + Buffer buffer = BufferDescriptorGetBuffer(buf_hdr);\n> + int bufid = -(buffer + 1);\n>\n> You do\n> int buffid = -buffer - 1;\n> in UnpinLocalBuffer()\n> They should be consistent.\n>\n> int bufid = -(buffer + 1);\n>\n> I think this version is better:\n>\n> int buffid = -buffer - 1;\n>\n> Since if buffer is INT_MAX, then the -(buffer + 1) version invokes\n> undefined behavior while the -buffer - 1 version doesn't.\n\nYou are right! Not sure what I was doing there...\n\nAh - turns out there's pre-existing code in localbuf.c that does it that way :(\nSee at least MarkLocalBufferDirty().\n\nWe really need to wrap this in something, rather than open coding it all over\nbufmgr.c/localbuf.c. I really dislike this indexing :(.\n\n\n> From a0228218e2ac299aac754eeb5b2be7ddfc56918d Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Fri, 17 Feb 2023 18:26:34 -0800\n> Subject: [PATCH v5 07/15] bufmgr: Acquire and clean victim buffer separately\n>\n> Previously we held buffer locks for two buffer mapping partitions at the same\n> time to change the identity of buffers. Particularly for extending relations\n> needing to hold the extension lock while acquiring a victim buffer is\n> painful. 
By separating out the victim buffer acquisition, future commits will\n> be able to change relation extensions to scale better.\n>\n> diff --git a/src/backend/storage/buffer/bufmgr.c\n> b/src/backend/storage/buffer/bufmgr.c\n> index 3d0683593f..ea423ae484 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -1200,293 +1200,111 @@ BufferAlloc(SMgrRelation smgr, char\n> relpersistence, ForkNumber forkNum,\n>\n> /*\n> * Buffer contents are currently invalid. Try to obtain the right to\n> * start I/O. If StartBufferIO returns false, then someone else managed\n> * to read it before we did, so there's nothing left for BufferAlloc() to\n> * do.\n> */\n> - if (StartBufferIO(buf, true))\n> + if (StartBufferIO(victim_buf_hdr, true))\n> *foundPtr = false;\n> else\n> *foundPtr = true;\n>\n> I know it was already like this, but since you edited the line already,\n> can we just make this this now?\n>\n> *foundPtr = !StartBufferIO(victim_buf_hdr, true);\n\nHm, I do think it's easier to review if largely unchanged code is just moved\naround, rather also rewritten. So I'm hesitant to do that here.\n\n\n> @@ -1595,6 +1413,237 @@ retry:\n> StrategyFreeBuffer(buf);\n> }\n>\n> +/*\n> + * Helper routine for GetVictimBuffer()\n> + *\n> + * Needs to be called on a buffer with a valid tag, pinned, but without the\n> + * buffer header spinlock held.\n> + *\n> + * Returns true if the buffer can be reused, in which case the buffer is only\n> + * pinned by this backend and marked as invalid, false otherwise.\n> + */\n> +static bool\n> +InvalidateVictimBuffer(BufferDesc *buf_hdr)\n> +{\n> + /*\n> + * Clear out the buffer's tag and flags and usagecount. This is not\n> + * strictly required, as BM_TAG_VALID/BM_VALID needs to be checked before\n> + * doing anything with the buffer. 
But currently it's beneficial as the\n> + * pre-check for several linear scans of shared buffers just checks the\n> + * tag.\n>\n> I don't really understand the above comment -- mainly the last sentence.\n\nTo start with, it's s/checks/check/\n\n\"linear scans\" is a reference to functions like DropRelationBuffers(), which\niterate over all buffers, and just check the tag for a match. If we leave the\ntag around, it'll still work, as InvalidateBuffer() etc will figure out that\nthe buffer is invalid. But of course that's slower then just skipping the\nbuffer \"early on\".\n\n\n> +static Buffer\n> +GetVictimBuffer(BufferAccessStrategy strategy, IOContext io_context)\n> +{\n> + BufferDesc *buf_hdr;\n> + Buffer buf;\n> + uint32 buf_state;\n> + bool from_ring;\n> +\n> + /*\n> + * Ensure, while the spinlock's not yet held, that there's a free refcount\n> + * entry.\n> + */\n> + ReservePrivateRefCountEntry();\n> + ResourceOwnerEnlargeBuffers(CurrentResourceOwner);\n> +\n> + /* we return here if a prospective victim buffer gets used concurrently */\n> +again:\n>\n> Why use goto instead of a loop here (again is the goto label)?\n\nI find it way more readable this way. I'd use a loop if it were the common\ncase to loop, but it's the rare case, and for that I find the goto more\nreadable.\n\n\n\n> @@ -4709,8 +4704,6 @@ TerminateBufferIO(BufferDesc *buf, bool\n> clear_dirty, uint32 set_flag_bits)\n> {\n> uint32 buf_state;\n>\n> I noticed that the comment above TermianteBufferIO() says\n> * TerminateBufferIO: release a buffer we were doing I/O on\n> * (Assumptions)\n> * My process is executing IO for the buffer\n>\n> Can we still say this is an assumption? What about when it is being\n> cleaned up after being called from AbortBufferIO()\n\nThat hasn't really changed - it was already called by AbortBufferIO().\n\nI think it's still correct, too. 
We must have marked the IO as being in\nprogress to get there.\n\n\n> diff --git a/src/backend/utils/resowner/resowner.c\n> b/src/backend/utils/resowner/resowner.c\n> index 19b6241e45..fccc59b39d 100644\n> --- a/src/backend/utils/resowner/resowner.c\n> +++ b/src/backend/utils/resowner/resowner.c\n> @@ -121,6 +121,7 @@ typedef struct ResourceOwnerData\n>\n> /* We have built-in support for remembering: */\n> ResourceArray bufferarr; /* owned buffers */\n> + ResourceArray bufferioarr; /* in-progress buffer IO */\n> ResourceArray catrefarr; /* catcache references */\n> ResourceArray catlistrefarr; /* catcache-list pins */\n> ResourceArray relrefarr; /* relcache references */\n> @@ -441,6 +442,7 @@ ResourceOwnerCreate(ResourceOwner parent, const char *name)\n>\n> Maybe worth mentioning in-progress buffer IO in resowner README? I know\n> it doesn't claim to be exhaustive, so, up to you.\n\nHm. Given the few types of resources mentioned in the README, I don't think\nit's worth doing so.\n\n\n> Also, I realize that existing code in this file has the extraneous\n> parantheses, but maybe it isn't worth staying consistent with that?\n> as in: &(owner->bufferioarr)\n\nI personally don't find it worth being consistent with that, but if you /\nothers think it is, I'd be ok with adapting to that.\n\n\n> From f26d1fa7e528d04436402aa8f94dc2442999dde3 Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 1 Mar 2023 13:24:19 -0800\n> Subject: [PATCH v5 09/15] bufmgr: Move relation extension handling into\n> ExtendBufferedRel{By,To,}\n>\n> diff --git a/src/backend/storage/buffer/bufmgr.c\n> b/src/backend/storage/buffer/bufmgr.c\n> index 3c95b87bca..4e07a5bc48 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n>\n> +/*\n> + * Extend relation by multiple blocks.\n> + *\n> + * Tries to extend the relation by extend_by blocks. 
Depending on the\n> + * availability of resources the relation may end up being extended by a\n> + * smaller number of pages (unless an error is thrown, always by at least one\n> + * page). *extended_by is updated to the number of pages the relation has been\n> + * extended to.\n> + *\n> + * buffers needs to be an array that is at least extend_by long. Upon\n> + * completion, the first extend_by array elements will point to a pinned\n> + * buffer.\n> + *\n> + * If EB_LOCK_FIRST is part of flags, the first returned buffer is\n> + * locked. This is useful for callers that want a buffer that is guaranteed to\n> + * be empty.\n>\n> This should document what the returned BlockNumber is.\n\nOk.\n\n\n> Also, instead of having extend_by and extended_by, how about just having\n> one which is set by the caller to the desired number to extend by and\n> then overwritten in this function to the value it successfully extended\n> by.\n\nI had it that way at first - but I think it turned out to be more confusing.\n\n\n> It would be nice if the function returned the number it extended by\n> instead of the BlockNumber.\n\nIt's not actually free to get the block number from a buffer (it causes more\nsharing of the BufferDesc cacheline, which then makes modifications of the\ncacheline more expensive). We should work on removing all those\nBufferGetBlockNumber(). So I don't want to introduce a new function that\nrequires using BufferGetBlockNumber().\n\nSo I don't think this would be an improvement.\n\n\n> + Assert((eb.rel != NULL) ^ (eb.smgr != NULL));\n>\n> Can we turn these into !=\n>\n> Assert((eb.rel != NULL) != (eb.smgr != NULL));\n>\n> since it is easier to understand.\n\nDone.\n\n\n> + * Extend the relation so it is at least extend_to blocks large, read buffer\n>\n> Use of \"read buffer\" here is confusing. 
We only read the block if, after\n> we try extending the relation, someone else already did so and we have\n> to read the block they extended in, right?\n\nThat's one case, yes. I think there's also some unfortunate other case that\nI'd like to get rid of. See my archeology at\nhttps://postgr.es/m/20230223010147.32oir7sb66slqnjk%40awork3.anarazel.de\n\n\n\n> + uint32 num_pages = lengthof(buffers);\n> + BlockNumber first_block;\n> +\n> + if ((uint64) current_size + num_pages > extend_to)\n> + num_pages = extend_to - current_size;\n> +\n> + first_block = ExtendBufferedRelCommon(eb, fork, strategy, flags,\n> + num_pages, extend_to,\n> + buffers, &extended_by);\n> +\n> + current_size = first_block + extended_by;\n> + Assert(current_size <= extend_to);\n> + Assert(num_pages != 0 || current_size >= extend_to);\n> +\n> + for (int i = 0; i < extended_by; i++)\n> + {\n> + if (first_block + i != extend_to - 1)\n>\n> Is there a way we could avoid pinning these other buffers to begin with\n> (e.g. passing a parameter to ExtendBufferedRelCommon())\n\nWe can't avoid pinning them. We could make ExtendBufferedRelCommon() release\nthem though - but I'm not sure that'd be an improvement. I actually had a\nflag for that temporarily, but\n\n\n\n> + if (buffer == InvalidBuffer)\n> + {\n> + bool hit;\n> +\n> + Assert(extended_by == 0);\n> + buffer = ReadBuffer_common(eb.smgr, eb.relpersistence,\n> + fork, extend_to - 1, mode, strategy,\n> + &hit);\n> + }\n> +\n> + return buffer;\n> +}\n>\n> Do we use compound literals? Here, this could be:\n>\n> buffer = ReadBuffer_common(eb.smgr, eb.relpersistence,\n> fork, extend_to - 1, mode, strategy,\n> &(bool) {0});\n>\n> To eliminate the extraneous hit variable.\n\nWe do use compound literals in a few places. However, I don't think it's a\ngood idea to pass a pointer to a temporary. At least I need to look up the\nlifetime rules for those every time. 
And this isn't a huge win, so I wouldn't\ngo for it here.\n\n\n\n> /*\n> * ReadBuffer_common -- common logic for all ReadBuffer variants\n> @@ -801,35 +991,36 @@ ReadBuffer_common(SMgrRelation smgr, char\n> relpersistence, ForkNumber forkNum,\n> bool found;\n> IOContext io_context;\n> IOObject io_object;\n> - bool isExtend;\n> bool isLocalBuf = SmgrIsTemp(smgr);\n>\n> *hit = false;\n>\n> + /*\n> + * Backward compatibility path, most code should use\n> + * ExtendRelationBuffered() instead, as acquiring the extension lock\n> + * inside ExtendRelationBuffered() scales a lot better.\n>\n> Think these are old function names in the comment\n\nIndeed.\n\n\n> +static BlockNumber\n> +ExtendBufferedRelShared(ExtendBufferedWhat eb,\n> + ForkNumber fork,\n> + BufferAccessStrategy strategy,\n> + uint32 flags,\n> + uint32 extend_by,\n> + BlockNumber extend_upto,\n> + Buffer *buffers,\n> + uint32 *extended_by)\n> +{\n> + BlockNumber first_block;\n> + IOContext io_context = IOContextForStrategy(strategy);\n> +\n> + LimitAdditionalPins(&extend_by);\n> +\n> + /*\n> + * Acquire victim buffers for extension without holding extension lock.\n> + * Writing out victim buffers is the most expensive part of extending the\n> + * relation, particularly when doing so requires WAL flushes. Zeroing out\n> + * the buffers is also quite expensive, so do that before holding the\n> + * extension lock as well.\n> + *\n> + * These pages are pinned by us and not valid. 
While we hold the pin they\n> + * can't be acquired as victim buffers by another backend.\n> + */\n> + for (uint32 i = 0; i < extend_by; i++)\n> + {\n> + Block buf_block;\n> +\n> + buffers[i] = GetVictimBuffer(strategy, io_context);\n> + buf_block = BufHdrGetBlock(GetBufferDescriptor(buffers[i] - 1));\n> +\n> + /* new buffers are zero-filled */\n> + MemSet((char *) buf_block, 0, BLCKSZ);\n> + }\n> +\n> + /*\n> + * Lock relation against concurrent extensions, unless requested not to.\n> + *\n> + * We use the same extension lock for all forks. That's unnecessarily\n> + * restrictive, but currently extensions for forks don't happen often\n> + * enough to make it worth locking more granularly.\n> + *\n> + * Note that another backend might have extended the relation by the time\n> + * we get the lock.\n> + */\n> + if (!(flags & EB_SKIP_EXTENSION_LOCK))\n> + {\n> + LockRelationForExtension(eb.rel, ExclusiveLock);\n> + eb.smgr = RelationGetSmgr(eb.rel);\n> + }\n> +\n> + /*\n> + * If requested, invalidate size cache, so that smgrnblocks asks the\n> + * kernel.\n> + */\n> + if (flags & EB_CLEAR_SIZE_CACHE)\n> + eb.smgr->smgr_cached_nblocks[fork] = InvalidBlockNumber;\n>\n> I don't see this in master, is it new?\n\nNot really - it's just elsewhere. See vm_extend() and fsm_extend(). I could\nmove this part back into \"Convert a few places to ExtendBufferedRelTo\", but I\ndoin't think that'd be better.\n\n\n> + * Flags influencing the behaviour of ExtendBufferedRel*\n> + */\n> +typedef enum ExtendBufferedFlags\n> +{\n> + /*\n> + * Don't acquire extension lock. This is safe only if the relation isn't\n> + * shared, an access exclusive lock is held or if this is the startup\n> + * process.\n> + */\n> + EB_SKIP_EXTENSION_LOCK = (1 << 0),\n> +\n> + /* Is this extension part of recovery? */\n> + EB_PERFORMING_RECOVERY = (1 << 1),\n> +\n> + /*\n> + * Should the fork be created if it does not currently exist? 
This likely\n> + * only ever makes sense for relation forks.\n> + */\n> + EB_CREATE_FORK_IF_NEEDED = (1 << 2),\n> +\n> + /* Should the first (possibly only) return buffer be returned locked? */\n> + EB_LOCK_FIRST = (1 << 3),\n> +\n> + /* Should the smgr size cache be cleared? */\n> + EB_CLEAR_SIZE_CACHE = (1 << 4),\n> +\n> + /* internal flags follow */\n>\n> I don't understand what this comment means (\"internal flags follow\")\n\nHm - just that the flags defined subsequently are for internal use, not for\ncallers to specify.\n\n\n\n\n\n> + */\n> +static int\n> +heap_multi_insert_pages(HeapTuple *heaptuples, int done, int ntuples,\n> Size saveFreeSpace)\n> +{\n> + size_t page_avail;\n> + int npages = 0;\n> +\n> + page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace;\n> + npages++;\n>\n> can this not just be this:\n>\n> size_t page_avail = BLCKSZ - SizeOfPageHeaderData - saveFreeSpace;\n> int npages = 1;\n\nYes.\n\n\n>\n> From 5d2be27caf8f4ee8f26841b2aa1674c90bd51754 Mon Sep 17 00:00:00 2001\n> From: Andres Freund <andres@anarazel.de>\n> Date: Wed, 26 Oct 2022 14:14:11 -0700\n> Subject: [PATCH v5 12/15] hio: Use ExtendBufferedRelBy()\n\n> ---\n> src/backend/access/heap/hio.c | 285 +++++++++++++++++-----------------\n> 1 file changed, 146 insertions(+), 139 deletions(-)\n>\n> diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c\n> index 65886839e7..48cfcff975 100644\n> --- a/src/backend/access/heap/hio.c\n> +++ b/src/backend/access/heap/hio.c\n> @@ -354,6 +270,9 @@ RelationGetBufferForTuple(Relation relation, Size len,\n>\n> so in RelationGetBufferForTuple() up above where your changes start,\n> there is this code\n>\n>\n> /*\n> * We first try to put the tuple on the same page we last inserted a tuple\n> * on, as cached in the BulkInsertState or relcache entry. 
If that\n> * doesn't work, we ask the Free Space Map to locate a suitable page.\n> * Since the FSM's info might be out of date, we have to be prepared to\n> * loop around and retry multiple times. (To insure this isn't an infinite\n> * loop, we must update the FSM with the correct amount of free space on\n> * each page that proves not to be suitable.) If the FSM has no record of\n> * a page with enough free space, we give up and extend the relation.\n> *\n> * When use_fsm is false, we either put the tuple onto the existing target\n> * page or extend the relation.\n> */\n> if (bistate && bistate->current_buf != InvalidBuffer)\n> {\n> targetBlock = BufferGetBlockNumber(bistate->current_buf);\n> }\n> else\n> targetBlock = RelationGetTargetBlock(relation);\n>\n> if (targetBlock == InvalidBlockNumber && use_fsm)\n> {\n> /*\n> * We have no cached target page, so ask the FSM for an initial\n> * target.\n> */\n> targetBlock = GetPageWithFreeSpace(relation, targetFreeSpace);\n> }\n>\n> And, I was thinking how, ReadBufferBI() only has one caller now\n> (RelationGetBufferForTuple()) and, this caller basically already has\n> checked for the case in the inside of ReadBufferBI() (the code I pasted\n> above)\n>\n> /* If we have the desired block already pinned, re-pin and return it */\n> if (bistate->current_buf != InvalidBuffer)\n> {\n> if (BufferGetBlockNumber(bistate->current_buf) == targetBlock)\n> {\n> /*\n> * Currently the LOCK variants are only used for extending\n> * relation, which should never reach this branch.\n> */\n> Assert(mode != RBM_ZERO_AND_LOCK &&\n> mode != RBM_ZERO_AND_CLEANUP_LOCK);\n>\n> IncrBufferRefCount(bistate->current_buf);\n> return bistate->current_buf;\n> }\n> /* ... 
else drop the old buffer */\n>\n> So, I was thinking maybe there is some way to inline the logic for\n> ReadBufferBI(), because I think it would feel more streamlined to me.\n\nI don't really see how - I'd welcome suggestions?\n\n\n> @@ -558,18 +477,46 @@ loop:\n> ReleaseBuffer(buffer);\n> }\n>\n> Oh, and I forget which commit introduced BulkInsertState->next_free and\n> last_free, but I remember thinking that it didn't seem to fit with the\n> other parts of that commit.\n\nI'll move it into the one using ExtendBufferedRelBy().\n\n\n> - /* Without FSM, always fall out of the loop and extend */\n> - if (!use_fsm)\n> - break;\n> + if (bistate\n> + && bistate->next_free != InvalidBlockNumber\n> + && bistate->next_free <= bistate->last_free)\n> + {\n> + /*\n> + * We bulk extended the relation before, and there are still some\n> + * unused pages from that extension, so we don't need to look in\n> + * the FSM for a new page. But do record the free space from the\n> + * last page, somebody might insert narrower tuples later.\n> + */\n>\n> Why couldn't we have found out that we bulk-extended before and get the\n> block from there up above the while loop?\n\nI'm not quite sure I follow - above the loop there might still have been space\non the prior page? We also need the ability to loop if the space has been used\nsince.\n\nI guess there's an argument for also checking above the loop, but I don't\nthink that'd currently ever be reachable.\n\n\n> + {\n> + bistate->next_free = InvalidBlockNumber;\n> + bistate->last_free = InvalidBlockNumber;\n> + }\n> + else\n> + bistate->next_free++;\n> + }\n> + else if (!use_fsm)\n> + {\n> + /* Without FSM, always fall out of the loop and extend */\n> + break;\n> + }\n>\n> It would be nice to have a comment explaining why this is in its own\n> else if instead of breaking earlier (i.e. !use_fsm is still a valid case\n> in the if branch above it)\n\nI'm not quite following. 
Breaking where earlier?\n\nNote that that branch is old code, it's just that a new way of getting a page\nthat potentially has free space is preceding it.\n\n\n\n> we can get rid of needLock and waitcount variables like this\n>\n> +#define MAX_BUFFERS 64\n> + Buffer victim_buffers[MAX_BUFFERS];\n> + BlockNumber firstBlock = InvalidBlockNumber;\n> + BlockNumber firstBlockFSM = InvalidBlockNumber;\n> + BlockNumber curBlock;\n> + uint32 extend_by_pages;\n> + uint32 no_fsm_pages;\n> + uint32 waitcount;\n> +\n> + extend_by_pages = num_pages;\n> +\n> + /*\n> + * Multiply the number of pages to extend by the number of waiters. Do\n> + * this even if we're not using the FSM, as it does relieve\n> + * contention. Pages will be found via bistate->next_free.\n> + */\n> + if (needLock)\n> + waitcount = RelationExtensionLockWaiterCount(relation);\n> + else\n> + waitcount = 0;\n> + extend_by_pages += extend_by_pages * waitcount;\n>\n> if (!RELATION_IS_LOCAL(relation))\n> extend_by_pages += extend_by_pages *\n> RelationExtensionLockWaiterCount(relation);\n\nI guess I find it useful to be able to quickly add logging messages for stuff\nlike this. I don't think local variables are as bad as you make them out to be\n:)\n\nThanks for the review!\n\nAndres\n\n\n",
"msg_date": "Tue, 28 Mar 2023 20:47:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
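The message above discusses scaling the number of pages to extend by the number of backends waiting on the relation extension lock. A minimal standalone sketch of that sizing heuristic follows; the function name and the 64-page cap are illustrative assumptions, not the actual hio.c code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the extension-sizing heuristic discussed in the
 * thread: extend by more pages when other backends are waiting on the
 * relation extension lock, so that concurrent inserters each find a free
 * page without re-extending. Capped at an assumed batch size of 64.
 */
static uint32_t
extend_by_pages_heuristic(uint32_t num_pages, uint32_t lock_waiters)
{
	uint32_t	extend_by = num_pages + num_pages * lock_waiters;

	if (extend_by > 64)
		extend_by = 64;
	return extend_by;
}
```

With no waiters this degenerates to the requested number of pages, which matches the non-contended case described above.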
{
"msg_contents": "Hi,\n\nOn 2023-03-27 15:32:47 +0900, Kyotaro Horiguchi wrote:\n> At Sun, 26 Mar 2023 12:26:59 -0700, Andres Freund <andres@anarazel.de> wrote in\n> > Hi,\n> >\n> > Attached is v5. Lots of comment polishing, a bit of renaming. I extracted the\n> > relation extension related code in hio.c back into its own function.\n> >\n> > While reviewing the hio.c code, I did realize that too much stuff is done\n> > while holding the buffer lock. See also the pre-existing issue\n> > https://postgr.es/m/20230325025740.wzvchp2kromw4zqz%40awork3.anarazel.de\n>\n> 0001, 0002 looks fine to me.\n>\n> 0003 adds the new function FileFallocte, but we already have\n> AllocateFile. Although fd.c contains functions with varying word\n> orders, it could be confusing that closely named functions have\n> different naming conventions.\n\nThe syscall is named fallocate, I don't think we'd gain anything by inventing\na different name for it? Given that there's a number of File$syscall\noperations, I think it's clear enough that it just fits into that. Unless you\nhave a better proposal?\n\n\n> +\t/*\n> +\t * Return in cases of a \"real\" failure, if fallocate is not supported,\n> +\t * fall through to the FileZero() backed implementation.\n> +\t */\n> +\tif (returnCode != EINVAL && returnCode != EOPNOTSUPP)\n> +\t\treturn returnCode;\n>\n> I'm not entirely sure, but man 2 fallocate tells that ENOSYS also can\n> be returned. Some googling indicate that ENOSYS might need the same\n> amendment to EOPNOTSUPP. However, I'm not clear on why man\n> posix_fallocate donsn't mention the former.\n\nposix_fallocate() and its errors are specified by posix, I guess. 
I think\nglibc etc will map ENOSYS to EOPNOTSUPP.\n\nI really dislike this bit from the posix_fallocate manpage:\n\nEINVAL offset was less than 0, or len was less than or equal to 0, or the underlying filesystem does not support the operation.\n\nWhy oh why would you add the \"or ..\" portion into EINVAL, when there's also\nEOPNOTSUPP?\n\n>\n> +\t\t(returnCode != EINVAL && returnCode != EINVAL))\n> :)\n>\n>\n> FileGetRawDesc(File file)\n> {\n> \tAssert(FileIsValid(file));\n> +\n> +\tif (FileAccess(file) < 0)\n> +\t\treturn -1;\n> +\n>\n> The function's comment is provided below.\n>\n> > * The returned file descriptor will be valid until the file is closed, but\n> > * there are a lot of things that can make that happen. So the caller should\n> > * be careful not to do much of anything else before it finishes using the\n> > * returned file descriptor.\n>\n> So, the responsibility to make sure the file is valid seems to lie\n> with the callers, although I'm not sure since there aren't any\n> function users in the tree.\n\nExcept, as I think you realized as well, external callers *can't* call\nFileAccess(), it's static.\n\n\n> I'm unclear as to why FileSize omits the case lruLessRecently != file.\n\nNot quite following - why would FileSize() deal with lruLessRecently itself?\nOr do you mean why FileSize() uses FileIsNotOpen() itself, rather than relying\non FileAccess() doing that internally?\n\n\n> When examining similar functions, such as FileGetRawFlags and\n> FileGetRawMode, I'm puzzled to find that FileAccess() nor\n> BasicOpenFilePermthe don't set the struct members referred to by the\n> functions.\n\nThose aren't involved with LRU mechanism, IIRC. Note that BasicOpenFilePerm()\nreturns an actual fd, not a File. So you can't call FileGetRawMode() on it. As\nBasicOpenFilePerm() says:\n * This is exported for use by places that really want a plain kernel FD,\n * but need to be proof against running out of FDs. 
...\n\n\nI don't think FileAccess() needs to set those struct members, that's already\nbeen done in PathNameOpenFilePerm().\n\n\n> This makes my question the usefulness of these functions including\n> FileGetRawDesc().\n\nIt's quite weird that we have FileGetRawDesc(), but don't allow to use it in a\nsafe way...\n\n\n> Regardless, since the\n> patchset doesn't use FileGetRawDesc(), I don't believe the fix is\n> necessary in this patch set.\n\nYea. It was used in an earlier version, but not anymore.\n\n\n\n>\n> +\tif ((uint64) blocknum + nblocks >= (uint64) InvalidBlockNumber)\n>\n> I'm not sure it is appropriate to assume InvalidBlockNumber equals\n> MaxBlockNumber + 1 in this context.\n\nHm. This is just the multi-block equivalent of what mdextend() already\ndoes. It's not pretty, indeed. I'm not sure there's really a better thing to\ndo for mdzeroextend(), given the mdextend() precedent? mdzeroextend() (just as\nmdextend()) will be called with blockNum == InvalidBlockNumber, if you try to\nextend past the size limit.\n\n\n>\n> +\t\t * However, we don't use FileAllocate() for small extensions, as it\n> +\t\t * defeats delayed allocation on some filesystems. Not clear where\n> +\t\t * that decision should be made though? For now just use a cutoff of\n> +\t\t * 8, anything between 4 and 8 worked OK in some local testing.\n>\n> The chose is quite similar to what FileFallocate() makes. However, I'm\n> not sure FileFallocate() itself should be doing this.\n\nI'm not following - there's no such choice in FileFallocate()? Do you mean\nthat FileFallocate() also falls back to FileZero()? I don't think those are\ncomparable.\n\nWe don't want the code outside of fd.c to have to implement a fallback for\nplatforms that don't have fallocate (or something similar), that's why\nFileFallocate() falls back to FileZero().\n\nHere we care about not calling FileFallocate() in too small increments, when\nthe relation might extend further. 
If we somehow knew in mdzeroextend() that\nthe file won't be extended further, it'd be a good idea to call\nFileFallocate(), even if just for a single block - it'd prevent the kernel\nfrom wasting memory for delayed allocation. But unfortunately we don't know\nif it's the final size, hence the heuristic.\n\nDoes that make sense?\n\n\nThanks!\n\nAndres\n\n\n",
"msg_date": "Tue, 28 Mar 2023 21:13:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
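The FileFallocate() error-handling discussion above boils down to one decision: which errno values mean "fallocate is unsupported here, fall back to zero-filling" versus a real failure. A small sketch of that predicate, incorporating the ENOSYS question raised in the review (the function name is illustrative, not the fd.c code):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Sketch of the fallback decision discussed above: EINVAL and EOPNOTSUPP
 * (and, per the man-page discussion, possibly ENOSYS) indicate that
 * fallocate is not usable for this file/filesystem, so the caller should
 * fall back to a FileZero()-style implementation. Anything else is a
 * genuine error to report to the caller.
 */
static bool
fallocate_should_fall_back(int returnCode)
{
	if (returnCode == EINVAL || returnCode == EOPNOTSUPP)
		return true;
#ifdef ENOSYS
	if (returnCode == ENOSYS)
		return true;
#endif
	return false;
}
```

This also makes the duplicated-operand bug noted in the review harder to write: the support check lives in one place instead of two inverted conditions.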
{
"msg_contents": "On Tue, Mar 28, 2023 at 11:47 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-03-26 17:42:45 -0400, Melanie Plageman wrote:\n> > + {\n> > + /* if errno is unset, assume problem is no disk space */\n> > + if (errno == 0)\n> > + errno = ENOSPC;\n> > + return -1;\n> > + }\n> >\n> > +int\n> > +FileFallocate(File file, off_t offset, off_t amount, uint32 wait_event_info)\n> > +{\n> > +#ifdef HAVE_POSIX_FALLOCATE\n> > + int returnCode;\n> > +\n> > + Assert(FileIsValid(file));\n> > + returnCode = FileAccess(file);\n> > + if (returnCode < 0)\n> > + return returnCode;\n> > +\n> > + pgstat_report_wait_start(wait_event_info);\n> > + returnCode = posix_fallocate(VfdCache[file].fd, offset, amount);\n> > + pgstat_report_wait_end();\n> > +\n> > + if (returnCode == 0)\n> > + return 0;\n> > +\n> > + /* for compatibility with %m printing etc */\n> > + errno = returnCode;\n> > +\n> > + /*\n> > + * Return in cases of a \"real\" failure, if fallocate is not supported,\n> > + * fall through to the FileZero() backed implementation.\n> > + */\n> > + if (returnCode != EINVAL && returnCode != EOPNOTSUPP)\n> > + return returnCode;\n> >\n> > I'm pretty sure you can just delete the below if statement\n> >\n> > + if (returnCode == 0 ||\n> > + (returnCode != EINVAL && returnCode != EINVAL))\n> > + return returnCode;\n>\n> Hm. I don't see how - wouldn't that lead us to call FileZero(), even if\n> FileFallocate() succeeded or failed (rather than not being supported)?\n\nUh...I'm confused...maybe my eyes aren't working. 
If returnCode was 0,\nyou already would have returned and if returnCode wasn't EINVAL, you\nalso already would have returned.\nNot to mention (returnCode != EINVAL && returnCode != EINVAL) contains\ntwo identical operands.\n\n> > +void\n> > +mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n> > + BlockNumber blocknum, int nblocks, bool skipFsync)\n> >\n> > So, I think there are a few too many local variables in here, and it\n> > actually makes it more confusing.\n> > Assuming you would like to keep the input parameters blocknum and\n> > nblocks unmodified for debugging/other reasons, here is a suggested\n> > refactor of this function\n>\n> I'm mostly adopting this.\n>\n>\n> > Also, I think you can combine the two error cases (I don't know if the\n> > user cares what you were trying to extend the file with).\n>\n> Hm. I do find it a somewhat useful distinction for figuring out problems - we\n> haven't used posix_fallocate for files so far, it seems plausible we'd hit\n> some portability issues. We could make it an errdetail(), I guess?\n\nI think that would be clearer.\n\n> > From ad7cd10a6c340d7f7d0adf26d5e39224dfd8439d Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Wed, 26 Oct 2022 12:05:07 -0700\n> > Subject: [PATCH v5 05/15] bufmgr: Add Pin/UnpinLocalBuffer()\n> >\n> > diff --git a/src/backend/storage/buffer/bufmgr.c\n> > b/src/backend/storage/buffer/bufmgr.c\n> > index fa20fab5a2..6f50dbd212 100644\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n> > @@ -4288,18 +4268,16 @@ ConditionalLockBuffer(Buffer buffer)\n> > }\n> >\n> > void\n> > -BufferCheckOneLocalPin(Buffer buffer)\n> > +BufferCheckWePinOnce(Buffer buffer)\n> >\n> > This name is weird. Who is we?\n>\n> The current backend. I.e. 
the function checks the current backend pins the\n> buffer exactly once, rather that *any* backend pins it once.\n>\n> I now see that BufferIsPinned() is named, IMO, misleadingly, more generally,\n> even though it also just applies to pins by the current backend.\n\nMaybe there is a way to use \"self\" instead of a pronoun? But, if you\nfeel quite strongly about a pronoun, I think \"we\" implies more than one\nbackend, so \"I\" would be better.\n\n> > @@ -1595,6 +1413,237 @@ retry:\n> > StrategyFreeBuffer(buf);\n> > }\n> >\n> > +/*\n> > + * Helper routine for GetVictimBuffer()\n> > + *\n> > + * Needs to be called on a buffer with a valid tag, pinned, but without the\n> > + * buffer header spinlock held.\n> > + *\n> > + * Returns true if the buffer can be reused, in which case the buffer is only\n> > + * pinned by this backend and marked as invalid, false otherwise.\n> > + */\n> > +static bool\n> > +InvalidateVictimBuffer(BufferDesc *buf_hdr)\n> > +{\n> > + /*\n> > + * Clear out the buffer's tag and flags and usagecount. This is not\n> > + * strictly required, as BM_TAG_VALID/BM_VALID needs to be checked before\n> > + * doing anything with the buffer. But currently it's beneficial as the\n> > + * pre-check for several linear scans of shared buffers just checks the\n> > + * tag.\n> >\n> > I don't really understand the above comment -- mainly the last sentence.\n>\n> To start with, it's s/checks/check/\n>\n> \"linear scans\" is a reference to functions like DropRelationBuffers(), which\n> iterate over all buffers, and just check the tag for a match. If we leave the\n> tag around, it'll still work, as InvalidateBuffer() etc will figure out that\n> the buffer is invalid. But of course that's slower then just skipping the\n> buffer \"early on\".\n\nAh. 
I see the updated comment on your branch and find it to be more clear.\n\n> > @@ -4709,8 +4704,6 @@ TerminateBufferIO(BufferDesc *buf, bool\n> > clear_dirty, uint32 set_flag_bits)\n> > {\n> > uint32 buf_state;\n> >\n> > I noticed that the comment above TermianteBufferIO() says\n> > * TerminateBufferIO: release a buffer we were doing I/O on\n> > * (Assumptions)\n> > * My process is executing IO for the buffer\n> >\n> > Can we still say this is an assumption? What about when it is being\n> > cleaned up after being called from AbortBufferIO()\n>\n> That hasn't really changed - it was already called by AbortBufferIO().\n>\n> I think it's still correct, too. We must have marked the IO as being in\n> progress to get there.\n\nOh, no I meant the \"my process is executing IO for the buffer\" --\ncouldn't another backend clear IO_IN_PROGRESS (i.e. not the original one\nwho set it on all the victim buffers)?\n\n> > Also, I realize that existing code in this file has the extraneous\n> > parantheses, but maybe it isn't worth staying consistent with that?\n> > as in: &(owner->bufferioarr)\n>\n> I personally don't find it worth being consistent with that, but if you /\n> others think it is, I'd be ok with adapting to that.\n\nRight, so I'm saying you should remove the extraneous parentheses in the\ncode you added.\n\n> > From f26d1fa7e528d04436402aa8f94dc2442999dde3 Mon Sep 17 00:00:00 2001\n> > From: Andres Freund <andres@anarazel.de>\n> > Date: Wed, 1 Mar 2023 13:24:19 -0800\n> > Subject: [PATCH v5 09/15] bufmgr: Move relation extension handling into\n> > ExtendBufferedRel{By,To,}\n> > + * Flags influencing the behaviour of ExtendBufferedRel*\n> > + */\n> > +typedef enum ExtendBufferedFlags\n> > +{\n> > + /*\n> > + * Don't acquire extension lock. 
This is safe only if the relation isn't\n> > + * shared, an access exclusive lock is held or if this is the startup\n> > + * process.\n> > + */\n> > + EB_SKIP_EXTENSION_LOCK = (1 << 0),\n> > +\n> > + /* Is this extension part of recovery? */\n> > + EB_PERFORMING_RECOVERY = (1 << 1),\n> > +\n> > + /*\n> > + * Should the fork be created if it does not currently exist? This likely\n> > + * only ever makes sense for relation forks.\n> > + */\n> > + EB_CREATE_FORK_IF_NEEDED = (1 << 2),\n> > +\n> > + /* Should the first (possibly only) return buffer be returned locked? */\n> > + EB_LOCK_FIRST = (1 << 3),\n> > +\n> > + /* Should the smgr size cache be cleared? */\n> > + EB_CLEAR_SIZE_CACHE = (1 << 4),\n> > +\n> > + /* internal flags follow */\n> >\n> > I don't understand what this comment means (\"internal flags follow\")\n>\n> Hm - just that the flags defined subsequently are for internal use, not for\n> callers to specify.\n\nIf EB_LOCK_TARGET is the only one of these, it might be more clear, for\nnow, to just say \"an internal flag\" or \"for internal use\" above\nEB_LOCK_TARGET, since it is the only one.\n\n> > - /* Without FSM, always fall out of the loop and extend */\n> > - if (!use_fsm)\n> > - break;\n> > + if (bistate\n> > + && bistate->next_free != InvalidBlockNumber\n> > + && bistate->next_free <= bistate->last_free)\n> > + {\n> > + /*\n> > + * We bulk extended the relation before, and there are still some\n> > + * unused pages from that extension, so we don't need to look in\n> > + * the FSM for a new page. But do record the free space from the\n> > + * last page, somebody might insert narrower tuples later.\n> > + */\n> >\n> > Why couldn't we have found out that we bulk-extended before and get the\n> > block from there up above the while loop?\n>\n> I'm not quite sure I follow - above the loop there might still have been space\n> on the prior page? 
We also need the ability to loop if the space has been used\n> since.\n>\n> I guess there's an argument for also checking above the loop, but I don't\n> think that'd currently ever be reachable.\n\nMy idea was that directly below this code in RelationGetBufferForTuple():\n\n if (bistate && bistate->current_buf != InvalidBuffer)\n targetBlock = BufferGetBlockNumber(bistate->current_buf);\n else\n targetBlock = RelationGetTargetBlock(relation);\n\nWe could check bistate->next_free instead of checking the freespace map first.\n\nBut, perhaps we couldn't hit this because we would have already have set\ncurrent_buf if we had a next_free?\n\n- Melanie\n\n\n",
"msg_date": "Wed, 29 Mar 2023 20:51:04 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
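The `bistate->next_free`/`last_free` bookkeeping debated above — hand out pages left over from a previous bulk extension before consulting the FSM again — can be modeled in isolation. This is a toy model with assumed names and an assumed InvalidBlockNumber sentinel, not the actual RelationGetBufferForTuple() code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define INVALID_BLOCK UINT32_MAX	/* stand-in for InvalidBlockNumber */

/* Toy model of the bulk-insert state fields discussed in the thread. */
typedef struct BulkState
{
	uint32_t	next_free;		/* next unused page from bulk extension */
	uint32_t	last_free;		/* last page of that extension */
} BulkState;

/*
 * Return the next pre-extended block, if any remain. Once the range is
 * exhausted, reset both fields so the caller falls back to the FSM, as
 * in the if/else-if structure quoted above.
 */
static bool
take_preextended_block(BulkState *bs, uint32_t *block)
{
	if (bs->next_free == INVALID_BLOCK || bs->next_free > bs->last_free)
		return false;			/* nothing cached, look in the FSM */
	*block = bs->next_free;
	if (bs->next_free == bs->last_free)
		bs->next_free = bs->last_free = INVALID_BLOCK;
	else
		bs->next_free++;
	return true;
}
```

The reset-to-invalid step mirrors the branch quoted in the review where both fields are set back to InvalidBlockNumber after the last cached page is consumed.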
{
"msg_contents": "Hi,\n\nOn 2023-03-29 20:51:04 -0400, Melanie Plageman wrote:\n> > > + if (returnCode == 0)\n> > > + return 0;\n> > > +\n> > > + /* for compatibility with %m printing etc */\n> > > + errno = returnCode;\n> > > +\n> > > + /*\n> > > + * Return in cases of a \"real\" failure, if fallocate is not supported,\n> > > + * fall through to the FileZero() backed implementation.\n> > > + */\n> > > + if (returnCode != EINVAL && returnCode != EOPNOTSUPP)\n> > > + return returnCode;\n> > >\n> > > I'm pretty sure you can just delete the below if statement\n> > >\n> > > + if (returnCode == 0 ||\n> > > + (returnCode != EINVAL && returnCode != EINVAL))\n> > > + return returnCode;\n> >\n> > Hm. I don't see how - wouldn't that lead us to call FileZero(), even if\n> > FileFallocate() succeeded or failed (rather than not being supported)?\n> \n> Uh...I'm confused...maybe my eyes aren't working. If returnCode was 0,\n> you already would have returned and if returnCode wasn't EINVAL, you\n> also already would have returned.\n> Not to mention (returnCode != EINVAL && returnCode != EINVAL) contains\n> two identical operands.\n\nI'm afraid it was not your eyes that weren't working...\n\n\n> > > void\n> > > -BufferCheckOneLocalPin(Buffer buffer)\n> > > +BufferCheckWePinOnce(Buffer buffer)\n> > >\n> > > This name is weird. Who is we?\n> >\n> > The current backend. I.e. the function checks the current backend pins the\n> > buffer exactly once, rather that *any* backend pins it once.\n> >\n> > I now see that BufferIsPinned() is named, IMO, misleadingly, more generally,\n> > even though it also just applies to pins by the current backend.\n> \n> Maybe there is a way to use \"self\" instead of a pronoun? 
But, if you\n> feel quite strongly about a pronoun, I think \"we\" implies more than one\n> backend, so \"I\" would be better.\n\nI have no strong feelings around this in any form :)\n\n\n\n\n> > > @@ -4709,8 +4704,6 @@ TerminateBufferIO(BufferDesc *buf, bool\n> > > clear_dirty, uint32 set_flag_bits)\n> > > {\n> > > uint32 buf_state;\n> > >\n> > > I noticed that the comment above TermianteBufferIO() says\n> > > * TerminateBufferIO: release a buffer we were doing I/O on\n> > > * (Assumptions)\n> > > * My process is executing IO for the buffer\n> > >\n> > > Can we still say this is an assumption? What about when it is being\n> > > cleaned up after being called from AbortBufferIO()\n> >\n> > That hasn't really changed - it was already called by AbortBufferIO().\n> >\n> > I think it's still correct, too. We must have marked the IO as being in\n> > progress to get there.\n> \n> Oh, no I meant the \"my process is executing IO for the buffer\" --\n> couldn't another backend clear IO_IN_PROGRESS (i.e. not the original one\n> who set it on all the victim buffers)?\n\nNo. Or at least not yet ;) - with AIO we will... Only the IO issuing backend\ncurrently is allowed to reset IO_IN_PROGRESS.\n\n\n> > > + /* Should the smgr size cache be cleared? 
*/\n> > > + EB_CLEAR_SIZE_CACHE = (1 << 4),\n> > > +\n> > > + /* internal flags follow */\n> > >\n> > > I don't understand what this comment means (\"internal flags follow\")\n> >\n> > Hm - just that the flags defined subsequently are for internal use, not for\n> > callers to specify.\n> \n> If EB_LOCK_TARGET is the only one of these, it might be more clear, for\n> now, to just say \"an internal flag\" or \"for internal use\" above\n> EB_LOCK_TARGET, since it is the only one.\n\nI am quite certain it won't be the only one...\n\n\n> > > - /* Without FSM, always fall out of the loop and extend */\n> > > - if (!use_fsm)\n> > > - break;\n> > > + if (bistate\n> > > + && bistate->next_free != InvalidBlockNumber\n> > > + && bistate->next_free <= bistate->last_free)\n> > > + {\n> > > + /*\n> > > + * We bulk extended the relation before, and there are still some\n> > > + * unused pages from that extension, so we don't need to look in\n> > > + * the FSM for a new page. But do record the free space from the\n> > > + * last page, somebody might insert narrower tuples later.\n> > > + */\n> > >\n> > > Why couldn't we have found out that we bulk-extended before and get the\n> > > block from there up above the while loop?\n> >\n> > I'm not quite sure I follow - above the loop there might still have been space\n> > on the prior page? 
We also need the ability to loop if the space has been used\n> > since.\n> >\n> > I guess there's an argument for also checking above the loop, but I don't\n> > think that'd currently ever be reachable.\n> \n> My idea was that directly below this code in RelationGetBufferForTuple():\n> \n> if (bistate && bistate->current_buf != InvalidBuffer)\n> targetBlock = BufferGetBlockNumber(bistate->current_buf);\n> else\n> targetBlock = RelationGetTargetBlock(relation);\n> \n> We could check bistate->next_free instead of checking the freespace map first.\n> \n> But, perhaps we couldn't hit this because we would have already have set\n> current_buf if we had a next_free?\n\nCorrect. I think it might be worth doing a larger refactoring of that function\nat some point not too far away...\n\nIt's definitely somewhat sad that we spend time locking the buffer, recheck\npins etc, for the callers like heap_multi_insert() and heap_update() that\nalready know that the page is full. But that seems like independent enough\nthat I'd not tackle it now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Mar 2023 18:03:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
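The naming debate above (BufferCheckOneLocalPin vs. BufferCheckWePinOnce) is about a check on the *current* backend's pin count, not the shared refcount. A toy illustration of that distinction, with hypothetical field names rather than PostgreSQL's actual buffer state layout:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model: PostgreSQL tracks pins held by the current backend
 * separately from the refcount shared across all backends. The check
 * discussed above asserts the former is exactly one, regardless of
 * how many pins other backends hold.
 */
typedef struct ToyBufferState
{
	uint32_t	shared_refcount;	/* pins across all backends */
	uint32_t	local_refcount;		/* pins held by this backend */
} ToyBufferState;

static int
buffer_pinned_once_by_self(const ToyBufferState *buf)
{
	return buf->local_refcount == 1;
}
```

Here a buffer pinned three times globally but once locally still passes, which is the semantics the "WePinOnce"/"self" naming is trying to convey.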
{
"msg_contents": "Hi,\n\nAttached is v6. Changes:\n\n- Try to address Melanie and Horiguchi-san's review. I think there's one or\n two further things that need to be done\n\n- Avoided inserting newly extended pages into the FSM while holding a buffer\n lock. If we need to do so, we now drop the buffer lock and recheck if there\n still is space (very very likely). See also [1]. I use the infrastructure\n introduced over in that in this patchset.\n\n- Lots of comment and commit message polishing. More needed, particularly for\n the latter, but ...\n\n- Added a patch to fix the pre-existing undefined behaviour in localbuf.c that\n Melanie pointed out. Plan to commit that soon.\n\n- Added a patch to fix some pre-existing DO_DB() format code issues. Plan to\n commit that soon.\n\n\nI did some benchmarking on \"bufmgr: Acquire and clean victim buffer\nseparately\" in isolation. For workloads that do a *lot* of reads, that proves\nto be a substantial benefit on its own. For the, obviously unrealistically\nextreme, workload of N backends doing\n SELECT pg_prewarm('pgbench_accounts', 'buffer');\nin a scale 100 database (with a 1281MB pgbench_accounts) and shared_buffers of\n128MB, I see > 2x gains at 128, 512 clients. Of course realistic workloads\nwill have much smaller gains, but it's still good to see.\n\n\nLooking at the patchset, I am mostly happy with the breakdown into individual\ncommits. However \"bufmgr: Move relation extension handling into\nExtendBufferedRel{By,To,}\" is quite large. But I don't quite see how to break\nit into smaller pieces without making things awkward (e.g. due to static\nfunctions being unused, or temporarily duplicating the code doing relation\nextensions).\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20230325025740.wzvchp2kromw4zqz%40awork3.anarazel.de",
"msg_date": "Wed, 29 Mar 2023 20:02:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> Attached is v6. Changes:\n\n0006:\n\n+ ereport(ERROR,\n+ errcode_for_file_access(),\n+ errmsg(\"could not extend file \\\"%s\\\" with posix_fallocate():\n%m\",\n+ FilePathName(v->mdfd_vfd)),\n+ errhint(\"Check free disk space.\"));\n\nPortability nit: mdzeroextend() doesn't know whether posix_fallocate() was\nused in FileFallocate().\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Mar 30, 2023 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:> Attached is v6. Changes:0006:+ ereport(ERROR,+ errcode_for_file_access(),+ errmsg(\"could not extend file \\\"%s\\\" with posix_fallocate(): %m\",+ FilePathName(v->mdfd_vfd)),+ errhint(\"Check free disk space.\"));Portability nit: mdzeroextend() doesn't know whether posix_fallocate() was used in FileFallocate().--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 30 Mar 2023 12:28:57 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-30 12:28:57 +0700, John Naylor wrote:\n> On Thu, Mar 30, 2023 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > Attached is v6. Changes:\n> \n> 0006:\n> \n> + ereport(ERROR,\n> + errcode_for_file_access(),\n> + errmsg(\"could not extend file \\\"%s\\\" with posix_fallocate():\n> %m\",\n> + FilePathName(v->mdfd_vfd)),\n> + errhint(\"Check free disk space.\"));\n> \n> Portability nit: mdzeroextend() doesn't know whether posix_fallocate() was\n> used in FileFallocate().\n\nFair point. I would however like to see a different error message for the two\nways of extending, at least initially. What about referencing FileFallocate()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Apr 2023 17:32:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-29 20:02:33 -0700, Andres Freund wrote:\n> Attached is v6. Changes:\n\nAttached is v7. Not much in the way of changes:\n- polished a lot of the commit messages\n- reordered commits to not be blocked as immediately by\n https://www.postgresql.org/message-id/20230325025740.wzvchp2kromw4zqz%40awork3.anarazel.de\n- Used the new relation extension function in two more places (finding an\n independent bug on the way), not sure why I didn't convert those earlier...\n\nI'm planning to push the patches up to the hio.c changes soon, unless somebody\nwould like me to hold off.\n\nAfter that I'm planning to wait for a buildfarm cycle, and push the changes\nnecessary to use bulk extension in hio.c (the main win).\n\nI might split the patch to use ExtendBufferedRelTo() into two, one for\nvm_extend() and fsm_extend(), and one for xlogutils.c. The latter is more\ncomplicated and has more of a complicated history (see [1]).\n\nGreetings,\n\nAndres Freund\n\n\nhttps://www.postgresql.org/message-id/20230223010147.32oir7sb66slqnjk%40awork3.anarazel.de",
"msg_date": "Tue, 4 Apr 2023 17:39:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 7:32 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > Portability nit: mdzeroextend() doesn't know whether posix_fallocate()\nwas\n> > used in FileFallocate().\n>\n> Fair point. I would however like to see a different error message for the\ntwo\n> ways of extending, at least initially. What about referencing\nFileFallocate()?\n\nSeems logical.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Apr 5, 2023 at 7:32 AM Andres Freund <andres@anarazel.de> wrote:> > Portability nit: mdzeroextend() doesn't know whether posix_fallocate() was> > used in FileFallocate().>> Fair point. I would however like to see a different error message for the two> ways of extending, at least initially. What about referencing FileFallocate()?Seems logical.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 5 Apr 2023 10:15:57 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-04 17:39:45 -0700, Andres Freund wrote:\n> I'm planning to push the patches up to the hio.c changes soon, unless somebody\n> would like me to hold off.\n\nDone that.\n\n\n> After that I'm planning to wait for a buildfarm cycle, and push the changes\n> necessary to use bulk extension in hio.c (the main win).\n\nWorking on that. Might end up being tomorrow.\n\n\n> I might split the patch to use ExtendBufferedRelTo() into two, one for\n> vm_extend() and fsm_extend(), and one for xlogutils.c. The latter is more\n> complicated and has more of a complicated history (see [1]).\n\nI've pushed the vm_extend() and fsm_extend() piece, and did split out the\nxlogutils.c case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 18:46:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-05 18:46:16 -0700, Andres Freund wrote:\n> On 2023-04-04 17:39:45 -0700, Andres Freund wrote:\n> > After that I'm planning to wait for a buildfarm cycle, and push the changes\n> > necessary to use bulk extension in hio.c (the main win).\n> \n> Working on that. Might end up being tomorrow.\n\nIt did. So far no complaints from the buildfarm. Although I pushed the last\npiece just now...\n\nBesides editing the commit message, not a lot of changes between what I posted\nlast and what I pushed. A few typos and awkward sentences, code formatting,\netc. I did change the API of RelationAddBlocks() to set *did_unlock = false,\nif it didn't unlock (and thus removed setting it in the caller).\n\n\n> > I might split the patch to use ExtendBufferedRelTo() into two, one for\n> > vm_extend() and fsm_extend(), and one for xlogutils.c. The latter is more\n> > complicated and has more of a complicated history (see [1]).\n> \n> I've pushed the vm_extend() and fsm_extend() piece, and did split out the\n> xlogutils.c case.\n\nWhich I pushed, just now. I did perform manual testing with creating\ndisconnected segments on the standby, and checking that everything behaves\nwell in that case.\n\n\nI think it might be worth having a C test for some of the bufmgr.c API. Things\nlike testing that retrying a failed relation extension works the second time\nround.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Apr 2023 18:15:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-06 18:15:14 -0700, Andres Freund wrote:\n> I think it might be worth having a C test for some of the bufmgr.c API. Things\n> like testing that retrying a failed relation extension works the second time\n> round.\n\nA few hours after this I hit a stupid copy-pasto (21d7c05a5cf) that would\nhopefully have been uncovered by such a test...\n\nI guess we could even test this specific instance without a more complicated\nframework. Create table with some data, rename the file, checkpoint - should\nfail, rename back, checkpoint - should succeed.\n\nIt's much harder to exercise the error paths inside the backend extending the\nrelation unfortunately, because we require the file to be opened rw before\ndoing much. And once the FD is open, removing the permissions doesn't help.\nThe least complicated approach I scan see is creating directory qutoas, but\nthat's quite file system specific...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 01:39:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
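The rename-the-file experiment sketched in that message (checkpoint should fail while the file is missing, succeed after it is renamed back) can be mimicked outside PostgreSQL in a few lines. This is a toy stand-in, not a real TAP test: "checkpoint" here just fsyncs one fake relation segment, and the file name is invented:

```python
import os
import tempfile

def checkpoint(path):
    """Stand-in for the checkpointer flushing one relation segment."""
    fd = os.open(path, os.O_RDWR)   # raises FileNotFoundError if it's gone
    try:
        os.fsync(fd)
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    rel = os.path.join(d, "16384")          # fake relfilenode segment
    with open(rel, "wb") as f:
        f.write(b"\0" * 8192)               # one zeroed 8kB page

    os.rename(rel, rel + ".gone")           # "rename the file"
    try:
        checkpoint(rel)                     # checkpoint - should fail
        first_attempt = "ok"
    except FileNotFoundError:
        first_attempt = "failed"

    os.rename(rel + ".gone", rel)           # "rename back"
    checkpoint(rel)                         # checkpoint - should succeed

assert first_attempt == "failed"
```

A real test would drive the same sequence through psql/CHECKPOINT against a running cluster; the point is only that the failure and the recovery are both observable from the outside.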
{
"msg_contents": "Hi Andres,\n\n07.04.2023 11:39, Andres Freund wrote:\n> Hi,\n>\n> On 2023-04-06 18:15:14 -0700, Andres Freund wrote:\n>> I think it might be worth having a C test for some of the bufmgr.c API. Things\n>> like testing that retrying a failed relation extension works the second time\n>> round.\n> A few hours after this I hit a stupid copy-pasto (21d7c05a5cf) that would\n> hopefully have been uncovered by such a test...\n\nA few days later I've found a new defect introduced with 31966b151.\nThe following script:\necho \"\nCREATE TABLE tbl(id int);\nINSERT INTO tbl(id) SELECT i FROM generate_series(1, 1000) i;\nDELETE FROM tbl;\nCHECKPOINT;\n\" | psql -q\n\nsleep 2\n\ngrep -C2 'automatic vacuum of table \".*.tbl\"' server.log\n\ntf=$(psql -Aqt -c \"SELECT format('%s/%s', pg_database.oid, relfilenode) FROM pg_database, pg_class WHERE datname = \ncurrent_database() AND relname = 'tbl'\")\nls -l \"$PGDB/base/$tf\"\n\npg_ctl -D $PGDB stop -m immediate\npg_ctl -D $PGDB -l server.log start\n\nwith the autovacuum enabled as follows:\nautovacuum = on\nlog_autovacuum_min_duration = 0\nautovacuum_naptime = 1\n\ngives:\n2023-04-11 20:56:56.261 MSK [675708] LOG: checkpoint starting: immediate force wait\n2023-04-11 20:56:56.324 MSK [675708] LOG: checkpoint complete: wrote 900 buffers (5.5%); 0 WAL file(s) added, 0 \nremoved, 0 recycled; write=0.016 s, sync=0.034 s, total=0.063 s; sync files=252, longest=0.017 s, average=0.001 s; \ndistance=4162 kB, estimate=4162 kB; lsn=0/1898588, redo lsn=0/1898550\n2023-04-11 20:56:57.558 MSK [676060] LOG: automatic vacuum of table \"testdb.public.tbl\": index scans: 0\n pages: 5 removed, 0 remain, 5 scanned (100.00% of total)\n tuples: 1000 removed, 0 remain, 0 are dead but not yet removable\n-rw------- 1 law law 0 апр 11 20:56 .../tmpdb/base/16384/16385\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to start.... 
stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\nserver stops with the following stack trace:\nCore was generated by `postgres: startup recovering 000000010000000000000001 '.\nProgram terminated with signal SIGABRT, Aborted.\n\nwarning: Section `.reg-xstate/790626' in core file too small.\n#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140209454906240) at ./nptl/pthread_kill.c:44\n44 ./nptl/pthread_kill.c: No such file or directory.\n(gdb) bt\n#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140209454906240) at ./nptl/pthread_kill.c:44\n#1 __pthread_kill_internal (signo=6, threadid=140209454906240) at ./nptl/pthread_kill.c:78\n#2 __GI___pthread_kill (threadid=140209454906240, signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n#3 0x00007f850ec53476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#4 0x00007f850ec397f3 in __GI_abort () at ./stdlib/abort.c:79\n#5 0x0000557950889c0b in ExceptionalCondition (\n conditionName=0x557950a67680 \"mode == RBM_NORMAL || mode == RBM_ZERO_AND_LOCK || mode == RBM_ZERO_ON_ERROR\",\n fileName=0x557950a673e8 \"bufmgr.c\", lineNumber=1008) at assert.c:66\n#6 0x000055795064f2d0 in ReadBuffer_common (smgr=0x557952739f38, relpersistence=112 'p', forkNum=MAIN_FORKNUM,\n blockNum=4294967295, mode=RBM_ZERO_AND_CLEANUP_LOCK, strategy=0x0, hit=0x7fff22dd648f) at bufmgr.c:1008\n#7 0x000055795064ebe7 in ReadBufferWithoutRelcache (rlocator=..., forkNum=MAIN_FORKNUM, blockNum=4294967295,\n mode=RBM_ZERO_AND_CLEANUP_LOCK, strategy=0x0, permanent=true) at bufmgr.c:800\n#8 0x000055795021c0fa in XLogReadBufferExtended (rlocator=..., forknum=MAIN_FORKNUM, blkno=0,\n mode=RBM_ZERO_AND_CLEANUP_LOCK, recent_buffer=0) at xlogutils.c:536\n#9 0x000055795021bd92 in XLogReadBufferForRedoExtended (record=0x5579526c4998, block_id=0 '\\000', mode=RBM_NORMAL,\n get_cleanup_lock=true, buf=0x7fff22dd6598) at xlogutils.c:391\n#10 0x00005579501783b1 in heap_xlog_prune (record=0x5579526c4998) at 
heapam.c:8726\n#11 0x000055795017b7db in heap2_redo (record=0x5579526c4998) at heapam.c:9960\n#12 0x0000557950215b34 in ApplyWalRecord (xlogreader=0x5579526c4998, record=0x7f85053d0120, replayTLI=0x7fff22dd6720)\n at xlogrecovery.c:1915\n#13 0x0000557950215611 in PerformWalRecovery () at xlogrecovery.c:1746\n#14 0x0000557950201ce3 in StartupXLOG () at xlog.c:5433\n#15 0x00005579505cb6d2 in StartupProcessMain () at startup.c:267\n#16 0x00005579505be9f7 in AuxiliaryProcessMain (auxtype=StartupProcess) at auxprocess.c:141\n#17 0x00005579505ca2b5 in StartChildProcess (type=StartupProcess) at postmaster.c:5369\n#18 0x00005579505c5224 in PostmasterMain (argc=3, argv=0x5579526c3e70) at postmaster.c:1455\n#19 0x000055795047a97d in main (argc=3, argv=0x5579526c3e70) at main.c:200\n\nAs I can see, autovacuum removes pages from the table file, and this causes\nthe crash while replaying the record:\nrmgr: Heap2 len (rec/tot): 60/ 988, tx: 0, lsn: 0/01898600, prev 0/01898588, desc: PRUNE \nsnapshotConflictHorizon 732 nredirected 0 ndead 226, blkref #0: rel 1663/16384/16385 blk 0 FPW\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 11 Apr 2023 22:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
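The failure mode in that report: after autovacuum truncated every page away, WAL replay of the PRUNE record asks for block 0, which is now past EOF, using RBM_ZERO_AND_CLEANUP_LOCK - a mode the assertion on the extension path rejected. A toy model of the intended redo-side behaviour (plain Python, not the actual bufmgr.c/xlogutils.c code; the function name and structure are simplified for illustration):

```python
PAGE_SIZE = 8192
(RBM_NORMAL, RBM_ZERO_AND_LOCK,
 RBM_ZERO_ON_ERROR, RBM_ZERO_AND_CLEANUP_LOCK) = range(4)

class Relation:
    def __init__(self, nblocks=0):
        self.pages = [bytearray(PAGE_SIZE) for _ in range(nblocks)]

    def nblocks(self):
        return len(self.pages)

    def extend_to(self, target_nblocks):
        # Zero-fill new blocks up to the requested size.
        while len(self.pages) < target_nblocks:
            self.pages.append(bytearray(PAGE_SIZE))

def xlog_read_buffer(rel, blkno, mode):
    """Redo-side read: a past-EOF block is zero-extended into existence.

    The buggy code asserted that mode was one of only three values on
    this path and so aborted on RBM_ZERO_AND_CLEANUP_LOCK; the fix is
    to accept every mode when extending during recovery."""
    if blkno >= rel.nblocks():
        rel.extend_to(blkno + 1)
    return rel.pages[blkno]

rel = Relation(nblocks=0)        # autovacuum truncated all pages away
page = xlog_read_buffer(rel, 0, RBM_ZERO_AND_CLEANUP_LOCK)
assert rel.nblocks() == 1
assert page == bytearray(PAGE_SIZE)
```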
{
"msg_contents": "Hi,\n\nOn 2023-04-11 22:00:00 +0300, Alexander Lakhin wrote:\n> A few days later I've found a new defect introduced with 31966b151.\n\nThat's the same issue that Tom also just reported, at\nhttps://postgr.es/m/392271.1681238924%40sss.pgh.pa.us\n\nAttached is my WIP fix, including a test.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 11 Apr 2023 16:21:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "12.04.2023 02:21, Andres Freund wrote:\n> Hi,\n>\n> On 2023-04-11 22:00:00 +0300, Alexander Lakhin wrote:\n>> A few days later I've found a new defect introduced with 31966b151.\n> That's the same issue that Tom also just reported, at\n> https://postgr.es/m/392271.1681238924%40sss.pgh.pa.us\n>\n> Attached is my WIP fix, including a test.\n\nThanks for the fix. I can confirm that the issue is gone.\nReadBuffer_common() contains an Assert(), that is similar to the fixed one,\nbut it looks unreachable for the WAL replay case after 26158b852.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 12 Apr 2023 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-12 08:00:00 +0300, Alexander Lakhin wrote:\n> 12.04.2023 02:21, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-04-11 22:00:00 +0300, Alexander Lakhin wrote:\n> > > A few days later I've found a new defect introduced with 31966b151.\n> > That's the same issue that Tom also just reported, at\n> > https://postgr.es/m/392271.1681238924%40sss.pgh.pa.us\n> > \n> > Attached is my WIP fix, including a test.\n> \n> Thanks for the fix. I can confirm that the issue is gone.\n> ReadBuffer_common() contains an Assert(), that is similar to the fixed one,\n> but it looks unreachable for the WAL replay case after 26158b852.\n\nGood catch. I implemented it there too. As now all of the modes are supported,\nI removed the assertion.\n\nI also extended the test slightly to also test the case of dropped relations.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Apr 2023 11:38:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nCould you please share repro steps for running these benchmarks? I am doing performance testing in this area and want to use the same benchmarks.\n\nThanks,\nMuhammad\n________________________________\nFrom: Andres Freund <andres@anarazel.de>\nSent: Friday, October 28, 2022 7:54 PM\nTo: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>; Thomas Munro <thomas.munro@gmail.com>; Melanie Plageman <melanieplageman@gmail.com>\nCc: Yura Sokolov <y.sokolov@postgrespro.ru>; Robert Haas <robertmhaas@gmail.com>\nSubject: refactoring relation extension and BufferAlloc(), faster COPY\n\nHi,\n\nI'm working to extract independently useful bits from my AIO work, to reduce\nthe size of that patchset. This is one of those pieces.\n\nIn workloads that extend relations a lot, we end up being extremely contended\non the relation extension lock. We've attempted to address that to some degree\nby using batching, which helps, but only so much.\n\nThe fundamental issue, in my opinion, is that we do *way* too much while\nholding the relation extension lock. We acquire a victim buffer, if that\nbuffer is dirty, we potentially flush the WAL, then write out that\nbuffer. Then we zero out the buffer contents. Call smgrextend().\n\nMost of that work does not actually need to happen while holding the relation\nextension lock. As far as I can tell, the minimum that needs to be covered by\nthe extension lock is the following:\n\n1) call smgrnblocks()\n2) insert buffer[s] into the buffer mapping table at the location returned by\n smgrnblocks\n3) mark buffer[s] as IO_IN_PROGRESS\n\n\n1) obviously has to happen with the relation extension lock held because\notherwise we might miss another relation extension. 
2+3) need to happen with\nthe lock held, because otherwise another backend not doing an extension could\nread the block before we're done extending, dirty it, write it out, and then\nhave it overwritten by the extending backend.\n\n\nThe reason we currently do so much work while holding the relation extension\nlock is that bufmgr.c does not know about the relation lock and that relation\nextension happens entirely within ReadBuffer* - there's no way to use a\nnarrower scope for the lock.\n\n\nMy fix for that is to add a dedicated function for extending relations that\ncan acquire the extension lock if necessary (callers can tell it to skip that,\ne.g., when initially creating an init fork). This routine is called by\nReadBuffer_common() when P_NEW is passed in, to provide backward\ncompatibility.\n\n\nTo be able to acquire victim buffers outside of the extension lock, victim\nbuffers are now acquired separately from inserting the new buffer mapping\nentry. Victim buffers are pinned, cleaned, removed from the buffer mapping\ntable and marked invalid. Because they are pinned, clock sweeps in other\nbackends won't return them. This is done in a new function,\n[Local]BufferAlloc().\n\nThis is similar to Yuri's patch at [0], but not that similar to earlier or\nlater approaches in that thread. I don't really understand why that thread\nwent on to ever more complicated approaches, when the basic approach shows\nplenty of gains, with no issues around the number of buffer mapping entries that\ncan exist etc.\n\n\n\nOther interesting bits I found:\n\na) For workloads that [mostly] fit into s_b, the smgrwrite() that BufferAlloc()\n does nearly doubles the amount of writes. First the kernel ends up writing\n out all the zeroed out buffers after a while, then when we write out the\n actual buffer contents.\n\n The best fix for that seems to be to optionally use posix_fallocate() to\n reserve space, without dirtying pages in the kernel page cache. 
However, it\n looks like that's only beneficial when extending by multiple pages at once,\n because it ends up causing one filesystem-journal entry for each extension\n on at least some filesystems.\n\n I added 'smgrzeroextend()' that can extend by multiple blocks, without the\n caller providing a buffer to write out. When extending by 8 or more blocks,\n posix_fallocate() is used if available, otherwise pg_pwritev_with_retry() is\n used to extend the file.\n\n\nb) I found that it is quite beneficial to bulk-extend the relation with\n smgrextend() even without concurrency. The reason for that is primarily\n the aforementioned dirty buffers that our current extension method causes.\n\n One bit that stumped me for quite a while is to know how much to extend the\n relation by. RelationGetBufferForTuple() drives the decision whether / how\n much to bulk extend purely on the contention on the extension lock, which\n obviously does not work for non-concurrent workloads.\n\n After quite a while I figured out that we actually have good information on\n how much to extend by, at least for COPY /\n heap_multi_insert(). heap_multi_insert() can compute how much space is\n needed to store all tuples, and pass that on to\n RelationGetBufferForTuple().\n\n For that to be accurate we need to recompute that number whenever we use an\n already partially filled page. That's not great, but doesn't appear to be a\n measurable overhead.\n\n\nc) Contention on the FSM and the pages returned by it is a serious bottleneck\n after a) and b).\n\n The biggest issue is that the current bulk insertion logic in hio.c enters\n all but one of the new pages into the freespacemap. That will immediately\n cause all the other backends to contend on the first few pages returned by the\n FSM, and cause contention on the FSM pages themselves.\n\n I've partially addressed that by using the information about the required\n number of pages from b). 
Whether we bulk insert or not, the pages\n we know we're going to need for one heap_multi_insert() don't need to be\n added to the FSM - we're going to use them anyway.\n\n I've stashed the number of free blocks in the BulkInsertState for now, but\n I'm not convinced that that's the right place.\n\n If I revert just this part, the \"concurrent COPY into unlogged table\"\n benchmark goes from ~240 tps to ~190 tps.\n\n\n Even after that change the FSM is a major bottleneck. Below I included\n benchmarks showing this by just removing the use of the FSM, but I haven't\n done anything further about it. The contention seems to be both from\n updating the FSM, as well as thundering-herd like symptoms from accessing\n the FSM.\n\n The update part could likely be addressed to some degree with a batch\n update operation updating the state for multiple pages.\n\n The access part could perhaps be addressed by adding an operation that gets\n a page and immediately marks it as fully used, so other backends won't also\n try to access it.\n\n\n\nd) doing\n /* new buffers are zero-filled */\n MemSet((char *) bufBlock, 0, BLCKSZ);\n\n under the extension lock is surprisingly expensive on my two socket\n workstation (but much less noticeable on my laptop).\n\n If I move the MemSet back under the extension lock, the \"concurrent COPY\n into unlogged table\" benchmark goes from ~240 tps to ~200 tps.\n\n\ne) When running a few benchmarks for this email, I noticed that there was a\n sharp performance dropoff for the patched code for a pgbench -S -s100 on a\n database with 1GB s_b, starting between 512 and 1024 clients. This started with\n the patch only acquiring one buffer partition lock at a time. 
Lots of\n debugging ensued, resulting in [3].\n\n The problem isn't actually related to the change, it just makes it more\n visible, because the \"lock chains\" between two partitions reduce the\n average length of the wait queues substantially, by distributing them\n between more partitions. [3] has a reproducer that's entirely independent\n of this patchset.\n\n\n\n\nBulk extension acquires a number of victim buffers, acquires the extension\nlock, inserts the buffers into the buffer mapping table and marks them as\nio-in-progress, calls smgrextend and releases the extension lock. After that\nbuffer[s] are locked (depending on mode and an argument indicating the number\nof blocks to be locked), and TerminateBufferIO() is called.\n\nThis requires two new pieces of infrastructure:\n\nFirst, pinning multiple buffers opens up the obvious danger that we might run\nout of non-pinned buffers. I added LimitAdditional[Local]Pins() that allows each\nbackend to pin a proportional share of buffers (although always allowing one,\nas we do today).\n\nSecond, having multiple IOs in progress at the same time isn't possible with\nthe InProgressBuf mechanism. I added a ResourceOwnerRememberBufferIO() etc to\ndeal with that instead. I like that this ends up removing a lot of\nAbortBufferIO() calls from the loops of various aux processes (now released\ninside ReleaseAuxProcessResources()).\n\nIn very extreme workloads (single backend doing a pgbench -S -s 100 against a\ns_b=64MB database) the memory allocations triggered by StartBufferIO() are\n*just about* visible, not sure if that's worth worrying about - we do such\nallocations for the much more common pinning of buffers as well.\n\n\nThe new [Bulk]ExtendSharedRelationBuffered() currently have both a Relation\nand a SMgrRelation argument, requiring at least one of them to be set. 
The\nreason for that is on the one hand that LockRelationForExtension() requires a\nrelation and on the other hand, redo routines typically don't have a Relation\naround (recovery doesn't require an extension lock). That's not pretty, but\nseems a tad better than the ReadBufferExtended() vs\nReadBufferWithoutRelcache() mess.\n\n\n\nI've done a fair bit of benchmarking of this patchset. For COPY it comes out\nahead everywhere. It's possible that there's a very small regression for\nextremely IO miss heavy workloads, more below.\n\n\nserver \"base\" configuration:\n\nmax_wal_size=150GB\nshared_buffers=24GB\nhuge_pages=on\nautovacuum=0\nbackend_flush_after=2MB\nmax_connections=5000\nwal_buffers=128MB\nwal_segment_size=1GB\n\nbenchmark: pgbench running COPY into a single table. pgbench -t is set\naccording to the client count, so that the same amount of data is inserted.\nThis is done both using small files ([1], ringbuffer not effective, no dirty\ndata to write out within the benchmark window) and a bit larger files ([2],\nlots of data to write out due to ringbuffer).\n\nTo make it a fair comparison HEAD includes the lwlock-waitqueue fix as well.\n\ns_b=24GB\n\ntest: unlogged_small_files, format: text, files: 1024, 9015MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 58.63 207 50.22 242 54.35 224\n2 32.67 372 25.82 472 27.30 446\n4 22.53 540 13.30 916 14.33 851\n8 15.14 804 7.43 1640 7.48 1632\n16 14.69 829 4.79 2544 4.50 2718\n32 15.28 797 4.41 2763 3.32 3710\n64 15.34 794 5.22 2334 3.06 4061\n128 15.49 786 4.97 2452 3.13 3926\n256 15.85 768 5.02 2427 3.26 3769\n512 16.02 760 5.29 2303 3.54 3471\n\ntest: logged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 68.18 178 59.41 205 63.43 192\n2 39.71 306 33.10 368 34.99 348\n4 27.26 446 19.75 617 20.09 607\n8 18.84 646 12.86 947 12.68 962\n16 15.96 763 9.62 1266 
8.51 1436\n32 15.43 789 8.20 1486 7.77 1579\n64 16.11 756 8.91 1367 8.90 1383\n128 16.41 742 10.00 1218 9.74 1269\n256 17.33 702 11.91 1023 10.89 1136\n512 18.46 659 14.07 866 11.82 1049\n\ntest: unlogged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 63.27s 192 56.14 217 59.25 205\n2 40.17s 303 29.88 407 31.50 386\n4 27.57s 442 16.16 754 17.18 709\n8 21.26s 573 11.89 1025 11.09 1099\n16 21.25s 573 10.68 1141 10.22 1192\n32 21.00s 580 10.72 1136 10.35 1178\n64 20.64s 590 10.15 1200 9.76 1249\n128 skipped\n256 skipped\n512 skipped\n\ntest: logged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 71.89s 169 65.57 217 69.09 69.09\n2 47.36s 257 36.22 407 38.71 38.71\n4 33.10s 368 21.76 754 22.78 22.78\n8 26.62s 457 15.89 1025 15.30 15.30\n16 24.89s 489 17.08 1141 15.20 15.20\n32 25.15s 484 17.41 1136 16.14 16.14\n64 26.11s 466 17.89 1200 16.76 16.76\n128 skipped\n256 skipped\n512 skipped\n\n\nJust to see how far it can be pushed, with binary format we can now get to\nnearly 6GB/s into a table when disabling the FSM - note the 2x difference\nbetween patch and patch+no-fsm at 32 clients.\n\ntest: unlogged_small_files, format: binary, files: 1024, 9508MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 34.14 357 28.04 434 29.46 413\n2 22.67 537 14.42 845 14.75 826\n4 16.63 732 7.62 1599 7.69 1587\n8 13.48 904 4.36 2795 4.13 2959\n16 14.37 848 3.78 3224 2.74 4493\n32 14.79 823 4.20 2902 2.07 5974\n64 14.76 825 5.03 2423 2.21 5561\n128 14.95 815 4.36 2796 2.30 5343\n256 15.18 802 4.31 2828 2.49 4935\n512 15.41 790 4.59 2656 2.84 4327\n\n\ns_b=4GB\n\ntest: unlogged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 62.55 194 54.22 224\n2 37.11 328 28.94 
421\n4 25.97 469 16.42 742\n8 20.01 609 11.92 1022\n16 19.55 623 11.05 1102\n32 19.34 630 11.27 1081\n64 19.07 639 12.04 1012\n128 19.22 634 12.27 993\n256 19.34 630 12.28 992\n512 19.60 621 11.74 1038\n\ntest: logged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 71.71 169 63.63 191\n2 46.93 259 36.31 335\n4 30.37 401 22.41 543\n8 22.86 533 16.90 721\n16 20.18 604 14.07 866\n32 19.64 620 13.06 933\n64 19.71 618 15.08 808\n128 19.95 610 15.47 787\n256 20.48 595 16.53 737\n512 21.56 565 16.86 722\n\ntest: unlogged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 62.65 194 55.74 218\n2 40.25 302 29.45 413\n4 27.37 445 16.26 749\n8 22.07 552 11.75 1037\n16 21.29 572 10.64 1145\n32 20.98 580 10.70 1139\n64 20.65 590 10.21 1193\n128 skipped\n256 skipped\n512 skipped\n\ntest: logged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 71.72 169 65.12 187\n2 46.46 262 35.74 341\n4 32.61 373 21.60 564\n8 26.69 456 16.30 747\n16 25.31 481 17.00 716\n32 24.96 488 17.47 697\n64 26.05 467 17.90 680\n128 skipped\n256 skipped\n512 skipped\n\n\ntest: unlogged_small_files, format: binary, files: 1024, 9505MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 37.62 323 32.77 371\n2 28.35 429 18.89 645\n4 20.87 583 12.18 1000\n8 19.37 629 10.38 1173\n16 19.41 627 10.36 1176\n32 18.62 654 11.04 1103\n64 18.33 664 11.89 1024\n128 18.41 661 11.91 1023\n256 18.52 658 12.10 1007\n512 18.78 648 11.49 1060\n\n\nbenchmark: Run a pgbench -S workload with scale 100, so it doesn't fit into\ns_b, thereby exercising BufferAlloc()'s buffer replacement path heavily.\n\n\nThe run-to-run variance on my workstation is high for this workload (both\nbefore/after my changes). 
I also found that the ramp-up time at higher client\ncounts is very significant:\nprogress: 2.1 s, 5816.8 tps, lat 1.835 ms stddev 4.450, 0 failed\nprogress: 3.0 s, 666729.4 tps, lat 5.755 ms stddev 16.753, 0 failed\nprogress: 4.0 s, 899260.1 tps, lat 3.619 ms stddev 41.108, 0 failed\n...\n\nOne would need to run pgbench for impractically long to make that effect\nvanish.\n\nMy not great solution for these was to run with -T21 -P5 and use the best 5s\nas the tps.\n\n\ns_b=1GB\n tps tps\nclients master patched\n1 49541 48805\n2 85342 90010\n4 167340 168918\n8 308194 303222\n16 524294 523678\n32 649516 649100\n64 932547 937702\n128 908249 906281\n256 856496 903979\n512 764254 934702\n1024 653886 925113\n2048 569695 917262\n4096 526782 903258\n\n\ns_b=128MB:\n tps tps\nclients master patched\n1 40407 39854\n2 73180 72252\n4 143334 140860\n8 240982 245331\n16 429265 420810\n32 544593 540127\n64 706408 726678\n128 713142 718087\n256 611030 695582\n512 552751 686290\n1024 508248 666370\n2048 474108 656735\n4096 448582 633040\n\n\nAs there might be a small regression at the smallest end, I ran a more extreme\nversion of the above. Using a pipelined pgbench -S, with a single client, for\nlonger. With s_b=8MB.\n\nTo further reduce noise I pinned the server to one cpu, the client to another\nand disabled turbo mode on the CPU.\n\nmaster \"total\" tps: 61.52\nmaster \"best 5s\" tps: 61.8\npatch \"total\" tps: 61.20\npatch \"best 5s\" tps: 61.4\n\nHardly conclusive, but it does look like there's a small effect. It could be\ncode layout or such.\n\nMy guess however is that it's the resource owner for in-progress IO that I\nadded - that adds an additional allocation inside the resowner machinery. I\ncommented those out (that's obviously incorrect!) just to see whether that\nchanges anything:\n\nno-resowner \"total\" tps: 62.03\nno-resowner \"best 5s\" tps: 62.2\n\nSo it looks like indeed, it's the resowner. 
I am a bit surprised, because\nobviously we already use that mechanism for pins, which obviously is more\nfrequent.\n\nI'm not sure it's worth worrying about - this is a pretty absurd workload. But\nif we decide it is, I can think of a few ways to address this. E.g.:\n\n- We could preallocate an initial element inside the ResourceArray struct, so\n that a newly created resowner won't need to allocate immediately\n- We could only use resowners if there's more than one IO in progress at the\n same time - but I don't like that idea much\n- We could try to store the \"in-progress\"-ness of a buffer inside the 'bufferpin'\n resowner entry - on 64bit system there's plenty space for that. But on 32bit systems...\n\n\nThe patches here aren't fully polished (as will be evident). But they should\nbe more than good enough to discuss whether this is a sane direction.\n\nGreetings,\n\nAndres Freund\n\n[0] https://postgr.es/m/3b108afd19fa52ed20c464a69f64d545e4a14772.camel%40postgrespro.ru\n[1] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT test);\n[2] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 6*100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n[3] https://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nCould you please share repro steps for running these benchmarks? 
I am doing performance testing in this area and want to use the same benchmarks.\n\n\n\n\nThanks,\nMuhammad\n\n\n\n\nFrom: Andres Freund <andres@anarazel.de>\nSent: Friday, October 28, 2022 7:54 PM\nTo: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>; Thomas Munro <thomas.munro@gmail.com>; Melanie Plageman <melanieplageman@gmail.com>\nCc: Yura Sokolov <y.sokolov@postgrespro.ru>; Robert Haas <robertmhaas@gmail.com>\nSubject: refactoring relation extension and BufferAlloc(), faster COPY\n \n\n\nHi,\n\nI'm working to extract independently useful bits from my AIO work, to reduce\nthe size of that patchset. This is one of those pieces.\n\nIn workloads that extend relations a lot, we end up being extremely contended\non the relation extension lock. We've attempted to address that to some degree\nby using batching, which helps, but only so much.\n\nThe fundamental issue, in my opinion, is that we do *way* too much while\nholding the relation extension lock. We acquire a victim buffer, if that\nbuffer is dirty, we potentially flush the WAL, then write out that\nbuffer. Then we zero out the buffer contents. Call smgrextend().\n\nMost of that work does not actually need to happen while holding the relation\nextension lock. As far as I can tell, the minimum that needs to be covered by\nthe extension lock is the following:\n\n1) call smgrnblocks()\n2) insert buffer[s] into the buffer mapping table at the location returned by\n smgrnblocks\n3) mark buffer[s] as IO_IN_PROGRESS\n\n\n1) obviously has to happen with the relation extension lock held because\notherwise we might miss another relation extension. 
2+3) need to happen with\nthe lock held, because otherwise another backend not doing an extension could\nread the block before we're done extending, dirty it, write it out, and then\nhave it overwritten by the extending backend.\n\n\nThe reason we currently do so much work while holding the relation extension\nlock is that bufmgr.c does not know about the relation lock and that relation\nextension happens entirely within ReadBuffer* - there's no way to use a\nnarrower scope for the lock.\n\n\nMy fix for that is to add a dedicated function for extending relations, that\ncan acquire the extension lock if necessary (callers can tell it to skip that,\ne.g., when initially creating an init fork). This routine is called by\nReadBuffer_common() when P_NEW is passed in, to provide backward\ncompatibility.\n\n\nTo be able to acquire victim buffers outside of the extension lock, victim\nbuffers are now acquired separately from inserting the new buffer mapping\nentry. Victim buffer are pinned, cleaned, removed from the buffer mapping\ntable and marked invalid. Because they are pinned, clock sweeps in other\nbackends won't return them. This is done in a new function,\n[Local]BufferAlloc().\n\nThis is similar to Yuri's patch at [0], but not that similar to earlier or\nlater approaches in that thread. I don't really understand why that thread\nwent on to ever more complicated approaches, when the basic approach shows\nplenty gains, with no issues around the number of buffer mapping entries that\ncan exist etc.\n\n\n\nOther interesting bits I found:\n\na) For workloads that [mostly] fit into s_b, the smgwrite() that BufferAlloc()\n does, nearly doubles the amount of writes. First the kernel ends up writing\n out all the zeroed out buffers after a while, then when we write out the\n actual buffer contents.\n\n The best fix for that seems to be to optionally use posix_fallocate() to\n reserve space, without dirtying pages in the kernel page cache. 
However, it\n looks like that's only beneficial when extending by multiple pages at once,\n because it ends up causing one filesystem-journal entry for each extension\n on at least some filesystems.\n\n I added 'smgrzeroextend()' that can extend by multiple blocks, without the\n caller providing a buffer to write out. When extending by 8 or more blocks,\n posix_fallocate() is used if available, otherwise pg_pwritev_with_retry() is\n used to extend the file.\n\n\nb) I found that is quite beneficial to bulk-extend the relation with\n smgrextend() even without concurrency. The reason for that is the primarily\n the aforementioned dirty buffers that our current extension method causes.\n\n One bit that stumped me for quite a while is to know how much to extend the\n relation by. RelationGetBufferForTuple() drives the decision whether / how\n much to bulk extend purely on the contention on the extension lock, which\n obviously does not work for non-concurrent workloads.\n\n After quite a while I figured out that we actually have good information on\n how much to extend by, at least for COPY /\n heap_multi_insert(). heap_multi_insert() can compute how much space is\n needed to store all tuples, and pass that on to\n RelationGetBufferForTuple().\n\n For that to be accurate we need to recompute that number whenever we use an\n already partially filled page. That's not great, but doesn't appear to be a\n measurable overhead.\n\n\nc) Contention on the FSM and the pages returned by it is a serious bottleneck\n after a) and b).\n\n The biggest issue is that the current bulk insertion logic in hio.c enters\n all but one of the new pages into the freespacemap. That will immediately\n cause all the other backends to contend on the first few pages returned the\n FSM, and cause contention on the FSM pages itself.\n\n I've, partially, addressed that by using the information about the required\n number of pages from b). 
Whether we bulk insert or not, the number of pages\n we know we're going to need for one heap_multi_insert() don't need to be\n added to the FSM - we're going to use them anyway.\n\n I've stashed the number of free blocks in the BulkInsertState for now, but\n I'm not convinced that that's the right place.\n\n If I revert just this part, the \"concurrent COPY into unlogged table\"\n benchmark goes from ~240 tps to ~190 tps.\n\n\n Even after that change the FSM is a major bottleneck. Below I included\n benchmarks showing this by just removing the use of the FSM, but I haven't\n done anything further about it. The contention seems to be both from\n updating the FSM, as well as thundering-herd like symptoms from accessing\n the FSM.\n\n The update part could likely be addressed to some degree with a batch\n update operation updating the state for multiple pages.\n\n The access part could perhaps be addressed by adding an operation that gets\n a page and immediately marks it as fully used, so other backends won't also\n try to access it.\n\n\n\nd) doing\n /* new buffers are zero-filled */\n MemSet((char *) bufBlock, 0, BLCKSZ);\n\n under the extension lock is surprisingly expensive on my two socket\n workstation (but much less noticable on my laptop).\n\n If I move the MemSet back under the extension lock, the \"concurrent COPY\n into unlogged table\" benchmark goes from ~240 tps to ~200 tps.\n\n\ne) When running a few benchmarks for this email, I noticed that there was a\n sharp performance dropoff for the patched code for a pgbench -S -s100 on a\n database with 1GB s_b, start between 512 and 1024 clients. This started with\n the patch only acquiring one buffer partition lock at a time. 
Lots of\n debugging ensued, resulting in [3].\n\n The problem isn't actually related to the change, it just makes it more\n visible, because the \"lock chains\" between two partitions reduce the\n average length of the wait queues substantially, by distributing them\n between more partitions. [3] has a reproducer that's entirely independent\n of this patchset.\n\n\n\n\nBulk extension acquires a number of victim buffers, acquires the extension\nlock, inserts the buffers into the buffer mapping table and marks them as\nio-in-progress, calls smgrextend and releases the extension lock. After that\nbuffer[s] are locked (depending on mode and an argument indicating the number\nof blocks to be locked), and TerminateBufferIO() is called.\n\nThis requires two new pieces of infrastructure:\n\nFirst, pinning multiple buffers opens up the obvious danger that we might run\nout of non-pinned buffers. I added LimitAdditional[Local]Pins() that allows each\nbackend to pin a proportional share of buffers (although always allowing one,\nas we do today).\n\nSecond, having multiple IOs in progress at the same time isn't possible with\nthe InProgressBuf mechanism. I added a ResourceOwnerRememberBufferIO() etc. to\ndeal with that instead. I like that this ends up removing a lot of\nAbortBufferIO() calls from the loops of various aux processes (now released\ninside ReleaseAuxProcessResources()).\n\nIn very extreme workloads (single backend doing a pgbench -S -s 100 against a\ns_b=64MB database) the memory allocations triggered by StartBufferIO() are\n*just about* visible, not sure if that's worth worrying about - we do such\nallocations for the much more common pinning of buffers as well.\n\n\nThe new [Bulk]ExtendSharedRelationBuffered() currently have both a Relation\nand a SMgrRelation argument, requiring at least one of them to be set. 
The\nreason for that is on the one hand that LockRelationForExtension() requires a\nrelation and on the other hand, redo routines typically don't have a Relation\naround (recovery doesn't require an extension lock). That's not pretty, but\nseems a tad better than the ReadBufferExtended() vs\nReadBufferWithoutRelcache() mess.\n\n\n\nI've done a fair bit of benchmarking of this patchset. For COPY it comes out\nahead everywhere. It's possible that there's a very small regression for\nextremely IO miss heavy workloads, more below.\n\n\nserver \"base\" configuration:\n\nmax_wal_size=150GB\nshared_buffers=24GB\nhuge_pages=on\nautovacuum=0\nbackend_flush_after=2MB\nmax_connections=5000\nwal_buffers=128MB\nwal_segment_size=1GB\n\nbenchmark: pgbench running COPY into a single table. pgbench -t is set\naccording to the client count, so that the same amount of data is inserted.\nThis is done both using small files ([1], ringbuffer not effective, no dirty\ndata to write out within the benchmark window) and a bit larger files ([2],\nlots of data to write out due to ringbuffer).\n\nTo make it a fair comparison HEAD includes the lwlock-waitqueue fix as well.\n\ns_b=24GB\n\ntest: unlogged_small_files, format: text, files: 1024, 9015MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 58.63 207 50.22 242 54.35 224\n2 32.67 372 25.82 472 27.30 446\n4 22.53 540 13.30 916 14.33 851\n8 15.14 804 7.43 1640 7.48 1632\n16 14.69 829 4.79 2544 4.50 2718\n32 15.28 797 4.41 2763 3.32 3710\n64 15.34 794 5.22 2334 3.06 4061\n128 15.49 786 4.97 2452 3.13 3926\n256 15.85 768 5.02 2427 3.26 3769\n512 16.02 760 5.29 2303 3.54 3471\n\ntest: logged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 68.18 178 59.41 205 63.43 192\n2 39.71 306 33.10 368 34.99 348\n4 27.26 446 19.75 617 20.09 607\n8 18.84 646 12.86 947 12.68 962\n16 15.96 763 9.62 1266 
8.51 1436\n32 15.43 789 8.20 1486 7.77 1579\n64 16.11 756 8.91 1367 8.90 1383\n128 16.41 742 10.00 1218 9.74 1269\n256 17.33 702 11.91 1023 10.89 1136\n512 18.46 659 14.07 866 11.82 1049\n\ntest: unlogged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 63.27s 192 56.14 217 59.25 205\n2 40.17s 303 29.88 407 31.50 386\n4 27.57s 442 16.16 754 17.18 709\n8 21.26s 573 11.89 1025 11.09 1099\n16 21.25s 573 10.68 1141 10.22 1192\n32 21.00s 580 10.72 1136 10.35 1178\n64 20.64s 590 10.15 1200 9.76 1249\n128 skipped\n256 skipped\n512 skipped\n\ntest: logged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 71.89s 169 65.57 217 69.09 69.09\n2 47.36s 257 36.22 407 38.71 38.71\n4 33.10s 368 21.76 754 22.78 22.78\n8 26.62s 457 15.89 1025 15.30 15.30\n16 24.89s 489 17.08 1141 15.20 15.20\n32 25.15s 484 17.41 1136 16.14 16.14\n64 26.11s 466 17.89 1200 16.76 16.76\n128 skipped\n256 skipped\n512 skipped\n\n\nJust to see how far it can be pushed, with binary format we can now get to\nnearly 6GB/s into a table when disabling the FSM - note the 2x difference\nbetween patch and patch+no-fsm at 32 clients.\n\ntest: unlogged_small_files, format: binary, files: 1024, 9508MB total\n seconds tbl-MBs seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch no_fsm no_fsm\n1 34.14 357 28.04 434 29.46 413\n2 22.67 537 14.42 845 14.75 826\n4 16.63 732 7.62 1599 7.69 1587\n8 13.48 904 4.36 2795 4.13 2959\n16 14.37 848 3.78 3224 2.74 4493\n32 14.79 823 4.20 2902 2.07 5974\n64 14.76 825 5.03 2423 2.21 5561\n128 14.95 815 4.36 2796 2.30 5343\n256 15.18 802 4.31 2828 2.49 4935\n512 15.41 790 4.59 2656 2.84 4327\n\n\ns_b=4GB\n\ntest: unlogged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 62.55 194 54.22 224\n2 37.11 328 28.94 
421\n4 25.97 469 16.42 742\n8 20.01 609 11.92 1022\n16 19.55 623 11.05 1102\n32 19.34 630 11.27 1081\n64 19.07 639 12.04 1012\n128 19.22 634 12.27 993\n256 19.34 630 12.28 992\n512 19.60 621 11.74 1038\n\ntest: logged_small_files, format: text, files: 1024, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 71.71 169 63.63 191\n2 46.93 259 36.31 335\n4 30.37 401 22.41 543\n8 22.86 533 16.90 721\n16 20.18 604 14.07 866\n32 19.64 620 13.06 933\n64 19.71 618 15.08 808\n128 19.95 610 15.47 787\n256 20.48 595 16.53 737\n512 21.56 565 16.86 722\n\ntest: unlogged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 62.65 194 55.74 218\n2 40.25 302 29.45 413\n4 27.37 445 16.26 749\n8 22.07 552 11.75 1037\n16 21.29 572 10.64 1145\n32 20.98 580 10.70 1139\n64 20.65 590 10.21 1193\n128 skipped\n256 skipped\n512 skipped\n\ntest: logged_medium_files, format: text, files: 64, 9018MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 71.72 169 65.12 187\n2 46.46 262 35.74 341\n4 32.61 373 21.60 564\n8 26.69 456 16.30 747\n16 25.31 481 17.00 716\n32 24.96 488 17.47 697\n64 26.05 467 17.90 680\n128 skipped\n256 skipped\n512 skipped\n\n\ntest: unlogged_small_files, format: binary, files: 1024, 9505MB total\n seconds tbl-MBs seconds tbl-MBs\nclients HEAD HEAD patch patch\n1 37.62 323 32.77 371\n2 28.35 429 18.89 645\n4 20.87 583 12.18 1000\n8 19.37 629 10.38 1173\n16 19.41 627 10.36 1176\n32 18.62 654 11.04 1103\n64 18.33 664 11.89 1024\n128 18.41 661 11.91 1023\n256 18.52 658 12.10 1007\n512 18.78 648 11.49 1060\n\n\nbenchmark: Run a pgbench -S workload with scale 100, so it doesn't fit into\ns_b, thereby exercising BufferAlloc()'s buffer replacement path heavily.\n\n\nThe run-to-run variance on my workstation is high for this workload (both\nbefore/after my changes). 
I also found that the ramp-up time at higher client\ncounts is very significant:\nprogress: 2.1 s, 5816.8 tps, lat 1.835 ms stddev 4.450, 0 failed\nprogress: 3.0 s, 666729.4 tps, lat 5.755 ms stddev 16.753, 0 failed\nprogress: 4.0 s, 899260.1 tps, lat 3.619 ms stddev 41.108, 0 failed\n...\n\nOne would need to run pgbench for impractically long to make that effect\nvanish.\n\nMy not great solution for these was to run with -T21 -P5 and use the best 5s\nas the tps.\n\n\ns_b=1GB\n tps tps\nclients master patched\n1 49541 48805\n2 85342 90010\n4 167340 168918\n8 308194 303222\n16 524294 523678\n32 649516 649100\n64 932547 937702\n128 908249 906281\n256 856496 903979\n512 764254 934702\n1024 653886 925113\n2048 569695 917262\n4096 526782 903258\n\n\ns_b=128MB:\n tps tps\nclients master patched\n1 40407 39854\n2 73180 72252\n4 143334 140860\n8 240982 245331\n16 429265 420810\n32 544593 540127\n64 706408 726678\n128 713142 718087\n256 611030 695582\n512 552751 686290\n1024 508248 666370\n2048 474108 656735\n4096 448582 633040\n\n\nAs there might be a small regression at the smallest end, I ran a more extreme\nversion of the above. Using a pipelined pgbench -S, with a single client, for\nlonger. With s_b=8MB.\n\nTo further reduce noise I pinned the server to one cpu, the client to another\nand disabled turbo mode on the CPU.\n\nmaster \"total\" tps: 61.52\nmaster \"best 5s\" tps: 61.8\npatch \"total\" tps: 61.20\npatch \"best 5s\" tps: 61.4\n\nHardly conclusive, but it does look like there's a small effect. It could be\ncode layout or such.\n\nMy guess however is that it's the resource owner for in-progress IO that I\nadded - that adds an additional allocation inside the resowner machinery. I\ncommented those out (that's obviously incorrect!) just to see whether that\nchanges anything:\n\nno-resowner \"total\" tps: 62.03\nno-resowner \"best 5s\" tps: 62.2\n\nSo it looks like indeed, it's the resowner. 
I am a bit surprised, because\nobviously we already use that mechanism for pins, which obviously is more\nfrequent.\n\nI'm not sure it's worth worrying about - this is a pretty absurd workload. But\nif we decide it is, I can think of a few ways to address this. E.g.:\n\n- We could preallocate an initial element inside the ResourceArray struct, so\n that a newly created resowner won't need to allocate immediately\n- We could only use resowners if there's more than one IO in progress at the\n same time - but I don't like that idea much\n- We could try to store the \"in-progress\"-ness of a buffer inside the 'bufferpin'\n resowner entry - on 64bit systems there's plenty of space for that. But on 32bit systems...\n\n\nThe patches here aren't fully polished (as will be evident). But they should\nbe more than good enough to discuss whether this is a sane direction.\n\nGreetings,\n\nAndres Freund\n\n[0] \nhttps://postgr.es/m/3b108afd19fa52ed20c464a69f64d545e4a14772.camel%40postgrespro.ru\n[1] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n[2] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 6*100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n[3] \nhttps://postgr.es/m/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de",
"msg_date": "Wed, 3 May 2023 18:25:51 +0000",
"msg_from": "Muhammad Malik <muhammad.malik1@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-03 18:25:51 +0000, Muhammad Malik wrote:\n> Could you please share repro steps for running these benchmarks? I am doing performance testing in this area and want to use the same benchmarks.\n\nThe email should contain all the necessary information. What are you missing?\n\n\nc=16;psql -c 'DROP TABLE IF EXISTS copytest_0; CREATE TABLE copytest_0(data\ntext not null);' && time /srv/dev/build/m-opt/src/bin/pgbench/pgbench -n -P1\n-c$c -j$c -t$((1024/$c)) -f ~/tmp/copy.sql && psql -c 'TRUNCATE copytest_0'\n\n\n> I've done a fair bit of benchmarking of this patchset. For COPY it comes out\n> ahead everywhere. It's possible that there's a very small regression for\n> extremly IO miss heavy workloads, more below.\n>\n>\n> server \"base\" configuration:\n>\n> max_wal_size=150GB\n> shared_buffers=24GB\n> huge_pages=on\n> autovacuum=0\n> backend_flush_after=2MB\n> max_connections=5000\n> wal_buffers=128MB\n> wal_segment_size=1GB\n>\n> benchmark: pgbench running COPY into a single table. pgbench -t is set\n> according to the client count, so that the same amount of data is inserted.\n> This is done oth using small files ([1], ringbuffer not effective, no dirty\n> data to write out within the benchmark window) and a bit larger files ([2],\n> lots of data to write out due to ringbuffer).\n\nI use a script like:\n\nc=16;psql -c 'DROP TABLE IF EXISTS copytest_0; CREATE TABLE copytest_0(data text not null);' && time /srv/dev/build/m-opt/src/bin/pgbench/pgbench -n -P1 -c$c -j$c -t$((1024/$c)) -f ~/tmp/copy.sql && psql -c 'TRUNCATE copytest_0'\n\n\n> [1] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT test);\n> [2] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 6*100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 May 2023 11:44:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "> I use a script like:\n\n> c=16;psql -c 'DROP TABLE IF EXISTS copytest_0; CREATE TABLE copytest_0(data text not null);' && time /srv/dev/build/m-opt/src/bin/pgbench/pgbench -n -P1 -c$c -j$c -t$((1024/$c)) -f ~/tmp/copy.sql && psql -c 'TRUNCATE copytest_0'\n\n> >[1] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT test);\n> >[2] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 6*100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n\nWhen I ran this script it did not insert anything into the copytest_0 table. It only generated a single copytest_data_text.copy file of size 9.236MB.\nPlease help me understand how is this 'pgbench running COPY into a single table'. Also what are the 'seconds' and 'tbl-MBs' metrics that were reported.\n\nThank you,\nMuhammad",
"msg_date": "Wed, 3 May 2023 19:29:46 +0000",
"msg_from": "Muhammad Malik <muhammad.malik1@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-03 19:29:46 +0000, Muhammad Malik wrote:\n> > I use a script like:\n>\n> > c=16;psql -c 'DROP TABLE IF EXISTS copytest_0; CREATE TABLE copytest_0(data text not null);' && time /srv/dev/build/m-opt/src/bin/pgbench/pgbench -n -P1 -c$c -j$c -t$((1024/$c)) -f ~/tmp/copy.sql && psql -c 'TRUNCATE copytest_0'\n>\n> > >[1] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT test);\n> > >[2] COPY (SELECT repeat(random()::text, 5) FROM generate_series(1, 6*100000)) TO '/tmp/copytest_data_text.copy' WITH (FORMAT text);\n>\n> When I ran this script it did not insert anything into the copytest_0 table. It only generated a single copytest_data_text.copy file of size 9.236MB.\n> Please help me understand how is this 'pgbench running COPY into a single table'.\n\nThat's the data generation for the file to be COPYed in. The script passed to\npgbench is just something like\n\nCOPY copytest_0 FROM '/tmp/copytest_data_text.copy';\nor\nCOPY copytest_0 FROM '/tmp/copytest_data_binary.copy';\n\n\n> Also what are the 'seconds' and 'tbl-MBs' metrics that were reported.\n\nThe total time for inserting the N files (1024 for the small files, 64 for the larger\nones). \"tbl-MBs\" is the size of the resulting table, divided by time, i.e. a\nmeasure of throughput.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 May 2023 14:25:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: refactoring relation extension and BufferAlloc(), faster COPY"
}
] |
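The "tbl-MBs" figures in the tables above are, per the explanation later in the thread, the size of the resulting table divided by the load time. They can be sanity-checked against the raw input-file size, which gives a lower bound (the table on disk is larger than the COPY file because of per-tuple and per-page overhead). A small sketch — the helper name is mine, and the sample numbers are taken from the unlogged_small_files s_b=24GB table:

```shell
# tbl_mbs SIZE_MB SECONDS -> MB/s, truncated to an integer
tbl_mbs() {
  awk -v mb="$1" -v s="$2" 'BEGIN { printf "%d\n", int(mb / s) }'
}

tbl_mbs 9015 58.63   # HEAD, 1 client: lower bound from the 9015MB file -> 153
tbl_mbs 9015 4.79    # patch, 16 clients -> 1882
```

The reported values (207 and 2544 tbl-MBs) are roughly 1.35x these lower bounds in both cases, consistent with a fixed on-disk expansion factor for this data set.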
[
{
"msg_contents": "I'm trying to add a new index, but when I finish it, I use “ create index xxx_index on t1 using xxx(a); ”, it gives me access method \"spb\" does not exist.\r\nAnd I don't know where this message is from, can you give me its position?\r\n\r\n\r\njacktby@gmail.com",
"msg_date": "Sat, 29 Oct 2022 17:25:32 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "access method xxx does not exist"
},
{
"msg_contents": "I'm trying to add a new index, but when I finish it, I use “ create index xxx_index on t1 using xxx(a); ”, it gives me access method \"spb\" does not exist.\r\nAnd I don't know where this message is from, can you give me its position?\r\n\r\n\r\njacktby@gmail.com\r\n\nI do it like this: I add an oid in pg_am.dat and pg_proc.dat for my index, and I add code in contrib and backend/access/myindex_name. Are there any other places I need to add some info?\n\njacktby@gmail.com\n\nFrom: jacktby@gmail.com\nSent: 2022-10-29 17:25\nTo: pgsql-hackers\nSubject: access method xxx does not exist",
"msg_date": "Sat, 29 Oct 2022 18:57:17 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: access method xxx does not exist"
}
] |
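On the question asked in this thread: the error is raised when the name given in CREATE INDEX ... USING cannot be found in pg_am — grepping the sources for the message text finds it quickly; for CREATE INDEX the failing pg_am syscache lookup is in src/backend/commands/indexcmds.c, and the same message is also raised from src/backend/commands/amcmds.c. Rather than hand-editing pg_am.dat and pg_proc.dat, an extension can register its access method at run time: since PostgreSQL 9.6, CREATE ACCESS METHOD exists for exactly this, with contrib/bloom as the reference example. A sketch — the handler name is illustrative, the operator class is modeled on bloom's, and the statements would normally live in an extension script:

```sql
-- The handler is a C function returning index_am_handler; it fills in
-- an IndexAmRoutine describing the AM's callbacks.
CREATE FUNCTION spbhandler(internal) RETURNS index_am_handler
    AS 'MODULE_PATHNAME' LANGUAGE C;

-- This inserts the pg_am row that the failing name lookup needs.
CREATE ACCESS METHOD spb TYPE INDEX HANDLER spbhandler;

-- An index AM is only usable once it has operator classes, e.g.:
CREATE OPERATOR CLASS int4_ops DEFAULT FOR TYPE int4 USING spb AS
    OPERATOR 1 = (int4, int4),
    FUNCTION 1 hashint4(int4);

CREATE INDEX xxx_index ON t1 USING spb (a);
```

With the pg_am row in place, the "access method ... does not exist" lookup failure goes away; remaining problems would then surface in the AM's own handler code instead.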
[
{
"msg_contents": "Hello!\n\nThis is a copy of [1] moved to a separate thread for the Commitfest.\n\nI discovered an interesting behaviour during installcheck runs on PG 15+ \nwhen the cluster was initialized with the ICU locale provider:\n\n$ initdb --locale-provider icu --icu-locale en-US -D data &&\npg_ctl -D data -l logfile start\n\n1) The ECPG tests fail because they use the SQL_ASCII encoding [2], while the \ndatabase template0 uses the ICU locale provider, and SQL_ASCII is not \nsupported by ICU:\n\n$ make -C src/interfaces/ecpg/ installcheck\n...\n============== creating database \"ecpg1_regression\" ==============\nERROR: encoding \"SQL_ASCII\" is not supported with ICU provider\nERROR: database \"ecpg1_regression\" does not exist\ncommand failed: \"/home/marina/postgresql/master/my/inst/bin/psql\" -X -c \n\"CREATE DATABASE \\\"ecpg1_regression\\\" TEMPLATE=template0 \nENCODING='SQL_ASCII'\" -c \"ALTER DATABASE \\\"ecpg1_regression\\\" SET \nlc_messages TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_monetary \nTO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_numeric TO 'C';ALTER \nDATABASE \\\"ecpg1_regression\\\" SET lc_time TO 'C';ALTER DATABASE \n\\\"ecpg1_regression\\\" SET bytea_output TO 'hex';ALTER DATABASE \n\\\"ecpg1_regression\\\" SET timezone_abbreviations TO 'Default';\" \n\"postgres\"\n\n2) The option --no-locale in pg_regress is described as \"use C locale\" \n[3]. 
But in this case the created databases actually use the ICU locale \nprovider with the ICU cluster locale from template0 (see \ndiff_check_backend_used_provider.txt):\n\n$ make NO_LOCALE=1 installcheck\n\nIn regression.diffs:\n\ndiff -U3 \n/home/marina/postgresql/master/src/test/regress/expected/test_setup.out \n/home/marina/postgresql/master/src/test/regress/results/test_setup.out\n--- \n/home/marina/postgresql/master/src/test/regress/expected/test_setup.out \n 2022-09-27 05:31:27.674628815 +0300\n+++ \n/home/marina/postgresql/master/src/test/regress/results/test_setup.out \n 2022-10-21 15:09:31.232992885 +0300\n@@ -143,6 +143,798 @@\n \\set filename :abs_srcdir '/data/person.data'\n COPY person FROM :'filename';\n VACUUM ANALYZE person;\n+NOTICE: varstrfastcmp_locale sss->collate_c 0 sss->locale 0xefacd0\n+NOTICE: varstrfastcmp_locale sss->locale->provider i\n+NOTICE: varstrfastcmp_locale sss->locale->info.icu.locale en-US\n...\n\nThe patch \nv1-0001-Fix-database-creation-during-installchecks-for-IC.patch fixes \nboth issues for me.\n\n[1] \nhttps://www.postgresql.org/message-id/727b5d5160f845dcf5e0818e625a6e56%40postgrespro.ru\n[2] \nhttps://github.com/postgres/postgres/blob/ce20f8b9f4354b46b40fd6ebf7ce5c37d08747e0/src/interfaces/ecpg/test/Makefile#L18\n[3] \nhttps://github.com/postgres/postgres/blob/ce20f8b9f4354b46b40fd6ebf7ce5c37d08747e0/src/test/regress/pg_regress.c#L1992\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 29 Oct 2022 12:54:43 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Fix database creation during installchecks for ICU cluster"
},
{
"msg_contents": "Hi,\n\nThanks for the patch!\n\n\nOn 10/29/22 12:54, Marina Polyakova wrote:\n>\n> 1) The ECPG tests fail because they use the SQL_ASCII encoding [2], \n> the database template0 uses the ICU locale provider and SQL_ASCII is \n> not supported by ICU:\n>\n> $ make -C src/interfaces/ecpg/ installcheck\n> ...\n> ============== creating database \"ecpg1_regression\" ==============\n> ERROR: encoding \"SQL_ASCII\" is not supported with ICU provider\n> ERROR: database \"ecpg1_regression\" does not exist\n> command failed: \"/home/marina/postgresql/master/my/inst/bin/psql\" -X \n> -c \"CREATE DATABASE \\\"ecpg1_regression\\\" TEMPLATE=template0 \n> ENCODING='SQL_ASCII'\" -c \"ALTER DATABASE \\\"ecpg1_regression\\\" SET \n> lc_messages TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_monetary \n> TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_numeric TO 'C';ALTER \n> DATABASE \\\"ecpg1_regression\\\" SET lc_time TO 'C';ALTER DATABASE \n> \\\"ecpg1_regression\\\" SET bytea_output TO 'hex';ALTER DATABASE \n> \\\"ecpg1_regression\\\" SET timezone_abbreviations TO 'Default';\" \"postgres\"\n>\n\nI can confirm that same error happens on my end and your patch fixes the \nissue. But, do ECPG tests really require SQL_ASCII encoding? I removed \nECPG tests' encoding line [1], rebuilt it and 'make -C \nsrc/interfaces/ecpg/ installcheck' passed without applying your patch.\n\n\n>\n> 2) The option --no-locale in pg_regress is described as \"use C locale\" \n> [3]. But in this case the created databases actually use the ICU \n> locale provider with the ICU cluster locale from template0 (see \n> diff_check_backend_used_provider.txt):\n>\n> $ make NO_LOCALE=1 installcheck\n\n\nThis works on my end without applying your patch. 
Commands I used are:\n\n$ ./configure --with-icu --prefix=$PWD/build_dir\n$ make && make install && export PATH=$PWD/build_dir/bin:$PATH\n$ initdb --locale-provider icu --icu-locale en-US -D data && pg_ctl -D \ndata -l logfile start\n$ make NO_LOCALE=1 installcheck\n\n[1] \nhttps://github.com/postgres/postgres/blob/ce20f8b9f4354b46b40fd6ebf7ce5c37d08747e0/src/interfaces/ecpg/test/Makefile#L18\n\n\nRegards,\nNazir Bilal Yavuz\n\n\n\n",
"msg_date": "Tue, 29 Nov 2022 17:54:33 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix database creation during installchecks for ICU cluster"
},
{
"msg_contents": "On Tue, 29 Nov 2022 at 20:24, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for the patch!\n>\n>\n> On 10/29/22 12:54, Marina Polyakova wrote:\n> >\n> > 1) The ECPG tests fail because they use the SQL_ASCII encoding [2],\n> > the database template0 uses the ICU locale provider and SQL_ASCII is\n> > not supported by ICU:\n> >\n> > $ make -C src/interfaces/ecpg/ installcheck\n> > ...\n> > ============== creating database \"ecpg1_regression\" ==============\n> > ERROR: encoding \"SQL_ASCII\" is not supported with ICU provider\n> > ERROR: database \"ecpg1_regression\" does not exist\n> > command failed: \"/home/marina/postgresql/master/my/inst/bin/psql\" -X\n> > -c \"CREATE DATABASE \\\"ecpg1_regression\\\" TEMPLATE=template0\n> > ENCODING='SQL_ASCII'\" -c \"ALTER DATABASE \\\"ecpg1_regression\\\" SET\n> > lc_messages TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_monetary\n> > TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_numeric TO 'C';ALTER\n> > DATABASE \\\"ecpg1_regression\\\" SET lc_time TO 'C';ALTER DATABASE\n> > \\\"ecpg1_regression\\\" SET bytea_output TO 'hex';ALTER DATABASE\n> > \\\"ecpg1_regression\\\" SET timezone_abbreviations TO 'Default';\" \"postgres\"\n> >\n>\n> I can confirm that same error happens on my end and your patch fixes the\n> issue. But, do ECPG tests really require SQL_ASCII encoding? I removed\n> ECPG tests' encoding line [1], rebuilt it and 'make -C\n> src/interfaces/ecpg/ installcheck' passed without applying your patch.\n>\n>\n> >\n> > 2) The option --no-locale in pg_regress is described as \"use C locale\"\n> > [3]. But in this case the created databases actually use the ICU\n> > locale provider with the ICU cluster locale from template0 (see\n> > diff_check_backend_used_provider.txt):\n> >\n> > $ make NO_LOCALE=1 installcheck\n>\n>\n> This works on my end without applying your patch. 
Commands I used are:\n>\n> $ ./configure --with-icu --prefix=$PWD/build_dir\n> $ make && make install && export PATH=$PWD/build_dir/bin:$PATH\n> $ initdb --locale-provider icu --icu-locale en-US -D data && pg_ctl -D\n> data -l logfile start\n> $ make NO_LOCALE=1 installcheck\n\nHi Marina Polyakova,\n\nSince it is working without your patch, is this patch required for any\nother scenarios?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 17 Jan 2023 17:17:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix database creation during installchecks for ICU cluster"
},
{
"msg_contents": "On Tue, 17 Jan 2023 at 17:17, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 29 Nov 2022 at 20:24, Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Thanks for the patch!\n> >\n> >\n> > On 10/29/22 12:54, Marina Polyakova wrote:\n> > >\n> > > 1) The ECPG tests fail because they use the SQL_ASCII encoding [2],\n> > > the database template0 uses the ICU locale provider and SQL_ASCII is\n> > > not supported by ICU:\n> > >\n> > > $ make -C src/interfaces/ecpg/ installcheck\n> > > ...\n> > > ============== creating database \"ecpg1_regression\" ==============\n> > > ERROR: encoding \"SQL_ASCII\" is not supported with ICU provider\n> > > ERROR: database \"ecpg1_regression\" does not exist\n> > > command failed: \"/home/marina/postgresql/master/my/inst/bin/psql\" -X\n> > > -c \"CREATE DATABASE \\\"ecpg1_regression\\\" TEMPLATE=template0\n> > > ENCODING='SQL_ASCII'\" -c \"ALTER DATABASE \\\"ecpg1_regression\\\" SET\n> > > lc_messages TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_monetary\n> > > TO 'C';ALTER DATABASE \\\"ecpg1_regression\\\" SET lc_numeric TO 'C';ALTER\n> > > DATABASE \\\"ecpg1_regression\\\" SET lc_time TO 'C';ALTER DATABASE\n> > > \\\"ecpg1_regression\\\" SET bytea_output TO 'hex';ALTER DATABASE\n> > > \\\"ecpg1_regression\\\" SET timezone_abbreviations TO 'Default';\" \"postgres\"\n> > >\n> >\n> > I can confirm that same error happens on my end and your patch fixes the\n> > issue. But, do ECPG tests really require SQL_ASCII encoding? I removed\n> > ECPG tests' encoding line [1], rebuilt it and 'make -C\n> > src/interfaces/ecpg/ installcheck' passed without applying your patch.\n> >\n> >\n> > >\n> > > 2) The option --no-locale in pg_regress is described as \"use C locale\"\n> > > [3]. 
But in this case the created databases actually use the ICU\n> > > locale provider with the ICU cluster locale from template0 (see\n> > > diff_check_backend_used_provider.txt):\n> > >\n> > > $ make NO_LOCALE=1 installcheck\n> >\n> >\n> > This works on my end without applying your patch. Commands I used are:\n> >\n> > $ ./configure --with-icu --prefix=$PWD/build_dir\n> > $ make && make install && export PATH=$PWD/build_dir/bin:$PATH\n> > $ initdb --locale-provider icu --icu-locale en-US -D data && pg_ctl -D\n> > data -l logfile start\n> > $ make NO_LOCALE=1 installcheck\n>\n> Hi Marina Polyakova,\n>\n> Since it is working without your patch, Is this patch required for any\n> other scenarios?\n\nThere have been no updates on this thread for some time, so this has\nbeen switched to Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 31 Jan 2023 23:21:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix database creation during installchecks for ICU cluster"
}
] |
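The ECPG failure discussed above can be reproduced without running the test suite at all, since it reduces to pg_regress issuing CREATE DATABASE ... ENCODING='SQL_ASCII' against a template0 that carries the ICU provider. A sketch assuming a PostgreSQL 15+ build with ICU; the LOCALE_PROVIDER override shown last is one possible workaround (depending on the cluster's libc locale it may also need explicit LC_COLLATE/LC_CTYPE settings):

```shell
initdb --locale-provider icu --icu-locale en-US -D data
pg_ctl -D data -l logfile start

# Essentially what pg_regress runs for the ECPG suite -- fails, because
# template0 carries provider=icu and ICU has no SQL_ASCII encoding:
psql -X -c "CREATE DATABASE ecpg1_regression TEMPLATE=template0 ENCODING='SQL_ASCII'"
# ERROR:  encoding "SQL_ASCII" is not supported with ICU provider

# Overriding the provider for this one database avoids the conflict:
psql -X -c "CREATE DATABASE ecpg1_regression TEMPLATE=template0 ENCODING='SQL_ASCII' LOCALE_PROVIDER=libc"
```

This is the shape of the fix the patch in this thread applies inside pg_regress itself, rather than requiring each test suite to know about the cluster's provider.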
[
{
"msg_contents": "Hello!\n\nThis is the last proposed patch on this subject [1] moved to a separate \nthread for the Commitfest.\n\nIt looks like the current order of checking ICU options in initdb \nand create database in PG 15+ is not user-friendly. Examples:\n\n1. initdb reports a missing ICU locale although it may already report \nthat the selected encoding cannot be used:\n\n$ initdb --encoding sql-ascii --locale-provider icu hoge\n...\ninitdb: error: ICU locale must be specified\n\n$ initdb --encoding sql-ascii --locale-provider icu --icu-locale en-US \nhoge\n...\ninitdb: error: encoding mismatch\ninitdb: detail: The encoding you selected (SQL_ASCII) is not supported \nwith the ICU provider.\ninitdb: hint: Rerun initdb and either do not specify an encoding \nexplicitly, or choose a matching combination.\n\n2. initdb/create database report problems with the ICU locale/encoding \nalthough they may already report that ICU is not supported in this \nbuild:\n\n2.1.\n\n$ initdb --locale-provider icu hoge\n...\ninitdb: error: ICU locale must be specified\n\n$ initdb --locale-provider icu --icu-locale en-US hoge\n...\ninitdb: error: ICU is not supported in this build\n\n$ createdb --locale-provider icu hoge\ncreatedb: error: database creation failed: ERROR: ICU locale must be \nspecified\n\n$ createdb --locale-provider icu --icu-locale en-US hoge\ncreatedb: error: database creation failed: ERROR: ICU is not supported \nin this build\n\n2.2.\n\n$ createdb --locale-provider icu --icu-locale en-US --encoding sql-ascii \nhoge\ncreatedb: error: database creation failed: ERROR: encoding \"SQL_ASCII\" \nis not supported with ICU provider\n\n$ createdb --locale-provider icu --icu-locale en-US --encoding utf8 hoge\ncreatedb: error: database creation failed: ERROR: ICU is not supported \nin this build\n\nThe patch \nv1-0001-Fix-order-of-checking-ICU-options-in-initdb-and-c.patch fixes \nthis:\n\n1.\n\n$ initdb --encoding sql-ascii --locale-provider icu hoge\n...\ninitdb: 
error: encoding mismatch\ninitdb: detail: The encoding you selected (SQL_ASCII) is not supported \nwith the ICU provider.\ninitdb: hint: Rerun initdb and either do not specify an encoding \nexplicitly, or choose a matching combination.\n\n2.1.\n\n$ initdb --locale-provider icu hoge\n...\ninitdb: error: ICU is not supported in this build\n\n$ createdb --locale-provider icu hoge\ncreatedb: error: database creation failed: ERROR: ICU is not supported \nin this build\n\n2.2.\n\n$ createdb --locale-provider icu --icu-locale en-US --encoding sql-ascii \nhoge\ncreatedb: error: database creation failed: ERROR: ICU is not supported \nin this build\n\nA side effect of the proposed patch in initdb is that if ICU locale is \nmissed (or ICU is not supported in this build), the provider, locales \nand encoding are reported before the error message:\n\n$ initdb --locale-provider icu hoge\nThe files belonging to this database system will be owned by user \n\"marina\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n LC_COLLATE: en_US.UTF-8\n LC_CTYPE: en_US.UTF-8\n LC_MESSAGES: en_US.UTF-8\n LC_MONETARY: ru_RU.UTF-8\n LC_NUMERIC: ru_RU.UTF-8\n LC_TIME: ru_RU.UTF-8\nThe default database encoding has been set to \"UTF8\".\ninitdb: error: ICU locale must be specified\n\n$ initdb --locale-provider icu hoge\nThe files belonging to this database system will be owned by user \n\"marina\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n LC_COLLATE: en_US.UTF-8\n LC_CTYPE: en_US.UTF-8\n LC_MESSAGES: en_US.UTF-8\n LC_MONETARY: ru_RU.UTF-8\n LC_NUMERIC: ru_RU.UTF-8\n LC_TIME: ru_RU.UTF-8\nThe default database encoding has been set to \"UTF8\".\ninitdb: error: ICU is not supported in this build\n\nI was thinking about another master-only version of the patch that first \nchecks everything for provider, locales and 
encoding but IMO it is worse \n[2].\n\n[1] \nhttps://www.postgresql.org/message-id/e94aca035bf0b92fac42d204ad385552%40postgrespro.ru\n[2] \nhttps://www.postgresql.org/message-id/79f410460c4fc9534000785adb8bf39a%40postgrespro.ru\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 29 Oct 2022 14:33:45 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "On 2022-10-29 14:33, Marina Polyakova wrote:\n> Hello!\n> \n> This is the last proposed patch on this subject [1] moved to a\n> separate thread for Commitfest..\n\nAlso added a patch to export with_icu when running src/bin/scripts tests \n[1].\nThe problem can be reproduced as\n\n$ meson setup <source dir> && ninja && meson test --print-errorlogs \n--suite setup --suite scripts\n\n[1] https://cirrus-ci.com/task/4825664661487616\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 29 Oct 2022 16:09:20 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "\nHello Marina.\n\nI just reviewed your patch.\n\nIt applied cleanly at my current master (commit d6a3dbe14f98d867b2fc3faeb99d2d3c2a48ca67).\n\nAlso, it worked as described in email. Since it's a clarification in an \nerror message, I think the documentation is fine.\n\nI played a bit with \"make check\", creating a database in my native \nlanguage (pt_BR), testing with some data and everything worked as \nexpected.\n\n--\nJose Arthur Benetasso Villanova\n\n\n",
"msg_date": "Sat, 12 Nov 2022 16:43:33 -0300 (-03)",
"msg_from": "Jose Arthur Benetasso Villanova <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create\n database"
},
{
"msg_contents": "On 2022-11-12 22:43, Jose Arthur Benetasso Villanova wrote:\n> Hello Marina.\n> \n> I just reviewed your patch.\n> \n> It applied cleanly at my current master (commit\n> d6a3dbe14f98d867b2fc3faeb99d2d3c2a48ca67).\n> \n> Also, it worked as described in email. Since it's a clarification in\n> an error message, I think the documentation is fine.\n> \n> I played a bit with \"make check\", creating a database in my native\n> language (pt_BR), testing with some data and everything worked as\n> expected.\n\nHello!\n\nThank you!\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 14 Nov 2022 13:56:06 +0300",
"msg_from": "Marina Polyakova <m.polyakova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "On 29.10.22 15:09, Marina Polyakova wrote:\n> On 2022-10-29 14:33, Marina Polyakova wrote:\n>> Hello!\n>>\n>> This is the last proposed patch on this subject [1] moved to a\n>> separate thread for Commitfest..\n> \n> Also added a patch to export with_icu when running src/bin/scripts tests \n> [1].\n\nI have committed the meson change.\n\n\n\n",
"msg_date": "Thu, 17 Nov 2022 07:54:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "On 29.10.22 13:33, Marina Polyakova wrote:\n> 2. initdb/create database report problems with the ICU locale/encoding \n> although they may already report that ICU is not supported in this build:\n> \n> 2.1.\n> \n> $ initdb --locale-provider icu hoge\n> ...\n> initdb: error: ICU locale must be specified\n> \n> $ initdb --locale-provider icu --icu-locale en-US hoge\n> ...\n> initdb: error: ICU is not supported in this build\n> \n> $ createdb --locale-provider icu hoge\n> createdb: error: database creation failed: ERROR: ICU locale must be \n> specified\n> \n> $ createdb --locale-provider icu --icu-locale en-US hoge\n> createdb: error: database creation failed: ERROR: ICU is not supported \n> in this build\n\nI'm not in favor of changing this. The existing code intentionally \ntries to centralize the \"ICU is not supported in this build\" knowledge \nin few places. Your patch tries to make this check early, but in the \nprocess adds more places where ICU support needs to be checked \nexplicitly. This increases the code size and also creates a future \nburden to maintain that level of checking. I think building without ICU \nshould be considered a marginal configuration at this point, so we don't \nneed to go out of our way to create a perfect user experience for this \nconfiguration, as long as we check somewhere in the end.\n\n\n\n",
"msg_date": "Thu, 17 Nov 2022 07:58:03 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "чт, 17 нояб. 2022 г. в 09:54, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com>:\n>\n> On 29.10.22 15:09, Marina Polyakova wrote:\n> > On 2022-10-29 14:33, Marina Polyakova wrote:\n> >> Hello!\n> >>\n> >> This is the last proposed patch on this subject [1] moved to a\n> >> separate thread for Commitfest..\n> >\n> > Also added a patch to export with_icu when running src/bin/scripts tests\n> > [1].\n>\n> I have committed the meson change.\n\nThank you!\n\n(Sorry, I'm having problems sending emails from corporate email :( )\n\n--\nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 19 Nov 2022 14:42:08 +0300",
"msg_from": "=?UTF-8?B?0JzQsNGA0LjQvdCwINCf0L7Qu9GP0LrQvtCy0LA=?=\n <polyakova.marina69@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "чт, 17 нояб. 2022 г. в 09:58, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com>:\n>\n> On 29.10.22 13:33, Marina Polyakova wrote:\n> > 2. initdb/create database report problems with the ICU locale/encoding\n> > although they may already report that ICU is not supported in this build:\n> >\n> > 2.1.\n> >\n> > $ initdb --locale-provider icu hoge\n> > ...\n> > initdb: error: ICU locale must be specified\n> >\n> > $ initdb --locale-provider icu --icu-locale en-US hoge\n> > ...\n> > initdb: error: ICU is not supported in this build\n> >\n> > $ createdb --locale-provider icu hoge\n> > createdb: error: database creation failed: ERROR: ICU locale must be\n> > specified\n> >\n> > $ createdb --locale-provider icu --icu-locale en-US hoge\n> > createdb: error: database creation failed: ERROR: ICU is not supported\n> > in this build\n>\n> I'm not in favor of changing this. The existing code intentionally\n> tries to centralize the \"ICU is not supported in this build\" knowledge\n> in few places. Your patch tries to make this check early, but in the\n> process adds more places where ICU support needs to be checked\n> explicitly. This increases the code size and also creates a future\n> burden to maintain that level of checking. I think building without ICU\n> should be considered a marginal configuration at this point, so we don't\n> need to go out of our way to create a perfect user experience for this\n> configuration, as long as we check somewhere in the end.\n\nMaybe this should be written in the documentation [1] or --with-icu\nshould be used by default? As a developer I usually check something\nwith the simplest configure run to make sure other options do not\naffect the checked behaviour. 
And some other developers in our company\nalso use simple configure runs, without --with-icu etc.\n\n[1] https://www.postgresql.org/docs/15/install-procedure.html\n\n--\nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 19 Nov 2022 15:12:29 +0300",
"msg_from": "=?UTF-8?B?0JzQsNGA0LjQvdCwINCf0L7Qu9GP0LrQvtCy0LA=?=\n <polyakova.marina69@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "On 19.11.22 13:12, Марина Полякова wrote:\n>> I'm not in favor of changing this. The existing code intentionally\n>> tries to centralize the \"ICU is not supported in this build\" knowledge\n>> in few places. Your patch tries to make this check early, but in the\n>> process adds more places where ICU support needs to be checked\n>> explicitly. This increases the code size and also creates a future\n>> burden to maintain that level of checking. I think building without ICU\n>> should be considered a marginal configuration at this point, so we don't\n>> need to go out of our way to create a perfect user experience for this\n>> configuration, as long as we check somewhere in the end.\n> Maybe this should be written in the documentation [1] or --with-icu\n> should be used by default? As a developer I usually check something\n> with the simplest configure run to make sure other options do not\n> affect the checked behaviour. And some other developers in our company\n> also use simple configure runs, without --with-icu etc.\n\nWell, this isn't a hard rule, just my opinion and where I see the world \nmoving. It's similar to --with-openssl and --with-lz4 etc.\n\n\n",
"msg_date": "Sat, 19 Nov 2022 13:51:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "сб, 19 нояб. 2022 г. в 15:51, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com>:\n>\n> On 19.11.22 13:12, Марина Полякова wrote:\n> >> I'm not in favor of changing this. The existing code intentionally\n> >> tries to centralize the \"ICU is not supported in this build\" knowledge\n> >> in few places. Your patch tries to make this check early, but in the\n> >> process adds more places where ICU support needs to be checked\n> >> explicitly. This increases the code size and also creates a future\n> >> burden to maintain that level of checking. I think building without ICU\n> >> should be considered a marginal configuration at this point, so we don't\n> >> need to go out of our way to create a perfect user experience for this\n> >> configuration, as long as we check somewhere in the end.\n> > Maybe this should be written in the documentation [1] or --with-icu\n> > should be used by default? As a developer I usually check something\n> > with the simplest configure run to make sure other options do not\n> > affect the checked behaviour. And some other developers in our company\n> > also use simple configure runs, without --with-icu etc.\n>\n> Well, this isn't a hard rule, just my opinion and where I see the world\n> moving. 
It's similar to --with-openssl and --with-lz4 etc.\n\nHere is another set of proposed patches:\n\nv2-0001-Fix-encoding-check-in-initdb-when-the-option-icu-.patch\nTarget: PG 15+\nFix encoding check in initdb when the option --icu-locale is not used:\n\n$ initdb --encoding sql-ascii --locale-provider icu hoge\n...\ninitdb: error: encoding mismatch\ninitdb: detail: The encoding you selected (SQL_ASCII) is not supported\nwith the ICU provider.\ninitdb: hint: Rerun initdb and either do not specify an encoding\nexplicitly, or choose a matching combination.\n\nAs with the previous version of this fix a side effect is that if ICU\nlocale is missed (or ICU is not supported in this build), the\nprovider, locales and encoding are reported before the error message:\n\n$ initdb --locale-provider icu hoge\nThe files belonging to this database system will be owned by user \"marina\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n LC_COLLATE: en_US.UTF-8\n LC_CTYPE: en_US.UTF-8\n LC_MESSAGES: en_US.UTF-8\n LC_MONETARY: ru_RU.UTF-8\n LC_NUMERIC: ru_RU.UTF-8\n LC_TIME: ru_RU.UTF-8\nThe default database encoding has been set to \"UTF8\".\ninitdb: error: ICU locale must be specified\n\n$ initdb --locale-provider icu --icu-locale en hoge\nThe files belonging to this database system will be owned by user \"marina\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n ICU locale: en\n LC_COLLATE: en_US.UTF-8\n LC_CTYPE: en_US.UTF-8\n LC_MESSAGES: en_US.UTF-8\n LC_MONETARY: ru_RU.UTF-8\n LC_NUMERIC: ru_RU.UTF-8\n LC_TIME: ru_RU.UTF-8\nThe default database encoding has been set to \"UTF8\".\ninitdb: error: ICU is not supported in this build\n\nv2-0002-doc-building-without-ICU-is-not-recommended.patch\nTarget: PG 15+\nFix the documentation that --without-icu is a marginal 
configuration.\n\nv2-0003-Build-with-ICU-by-default.patch\nTarget: PG 16+\nBuild with ICU by default as already done for readline and zlib libraries.\n\n--\nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 19 Nov 2022 22:36:12 +0300",
"msg_from": "=?UTF-8?B?0JzQsNGA0LjQvdCwINCf0L7Qu9GP0LrQvtCy0LA=?=\n <polyakova.marina69@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "On 19.11.22 20:36, Марина Полякова wrote:\n> Here is another set of proposed patches:\n> \n> v2-0001-Fix-encoding-check-in-initdb-when-the-option-icu-.patch\n> Target: PG 15+\n> Fix encoding check in initdb when the option --icu-locale is not used:\n\nI'm having a hard time figuring out from your examples what you are \ntrying to change. Which one is the \"before\" example and which one is \nthe \"after\", and which aspect specifically is the issue and what \nspecifically is being addressed? I tried out the examples in the \ncurrent code and didn't find anything obviously wrong in the behavior.\n\nI'm concerned that the initdb.c changes are a bit unprincipled. They \njust move code around to achieve some behavior without keeping the \noverall structure in mind. For example, check_icu_locale_encoding() \nintentionally had the same API as check_locale_encoding(), but now \nthat's being changed. And setlocales() did all the locale parameter \nvalidity checking, but now part of that is being moved around. I'm \nafraid this makes initdb.c even more spaghetti code than it already is.\n\nWhat about those test changes? I can't tell if they are related. \ncreatedb isn't being changed; is that test change related or separate?\n\n> v2-0002-doc-building-without-ICU-is-not-recommended.patch\n> Target: PG 15+\n> Fix the documentation that --without-icu is a marginal configuration.\n> \n> v2-0003-Build-with-ICU-by-default.patch\n> Target: PG 16+\n> Build with ICU by default as already done for readline and zlib libraries.\n\nWe are not going to make these kinds of changes specifically for ICU. \nI'd say, for example, the same applies to --with-openssl and --with-lz4, \nand probably others. If this is an issue, then we need a more general \napproach than just ICU. This should be a separate thread in any case.\n\n\n",
"msg_date": "Thu, 24 Nov 2022 08:41:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "Is this feedback enough to focus the work on the right things?\n\nI feel like Marina Polyakova pointed out some real confusing behaviour\nand perhaps there's a way to solve them by focusing on one at a time\nwithout making large changes in the code.\n\nPerhaps an idea would be to have each module provide two functions, one\nwhich is called early and signals an error if that module's parameters\nare provided when it's not compiled in, and a second which verifies\nthat the parameters are consistent at the point in time where that's\nappropriate. (Not entirely unlike what we do for GUCs, though simpler)\n\nIf every module did that consistently then it would avoid making the\nchanges \"unprincipled\" or \"spaghetti\" though frankly I find words like\nthat not very helpful to someone receiving that feedback.\n\nThe patch is obviously not ready for commit now but it also seems like\nthe feedback has not been really sufficient for Marina Polyakova to\nmake progress either.\n\n\n--\nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 13:59:32 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
},
{
"msg_contents": "This patch has now been waiting for author since December, with the thread\nstalled. I am marking this returned with feedback for now, please feel free to\nre-submit the patch to a future CF when there is renewed interest in working on\nthis.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:33:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of checking ICU options in initdb and create database"
}
] |
[
{
"msg_contents": "Respected Sir/Madam\nI am Joseph Raj Vishal, a Computer Science undergrad. I have just entered\nthird year at Dayananda Sagar College of Engineering in Bengaluru. I am new\nto open source contributions but I am well aware of Python, Java, Django,\nPHP, SQL, Javascript,Kotlin and Android. I would love to contribute to your\norganisation but I would need your help getting started.\nHoping to hear from you soon\n\nRegards\nJoseph Raj Vishal\n\n\nRespected Sir/MadamI am Joseph Raj Vishal, a Computer \nScience undergrad. I have just entered third year at Dayananda Sagar \nCollege of Engineering in Bengaluru. I am new to open source \ncontributions but I am well aware of Python, Java, Django, PHP, SQL, \nJavascript,Kotlin and Android. I would love to contribute to your \norganisation but I would need your help getting started.Hoping to hear from you soonRegardsJoseph Raj Vishal",
"msg_date": "Sat, 29 Oct 2022 20:59:53 +0530",
"msg_from": "1DS20CS093 Joseph Raj Vishal <josephrvishal@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to started with Contributions"
},
{
"msg_contents": "HI Joseph:\n\nMaybe the following links are helpful.\n\nhttps://www.postgresql.org/message-id/Pine.BSF.4.21.9912052011040.823-100000@thelab.hub.org\n\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n\nOn Sun, Oct 30, 2022 at 3:42 PM 1DS20CS093 Joseph Raj Vishal <\njosephrvishal@gmail.com> wrote:\n\n> Respected Sir/Madam\n> I am Joseph Raj Vishal, a Computer Science undergrad. I have just entered\n> third year at Dayananda Sagar College of Engineering in Bengaluru. I am new\n> to open source contributions but I am well aware of Python, Java, Django,\n> PHP, SQL, Javascript,Kotlin and Android. I would love to contribute to your\n> organisation but I would need your help getting started.\n> Hoping to hear from you soon\n>\n> Regards\n> Joseph Raj Vishal\n>\n\n\n\n-- \nBest Regards\nAndy Fan\n\nHI Joseph:Maybe the following links are helpful. https://www.postgresql.org/message-id/Pine.BSF.4.21.9912052011040.823-100000@thelab.hub.org https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F On Sun, Oct 30, 2022 at 3:42 PM 1DS20CS093 Joseph Raj Vishal <josephrvishal@gmail.com> wrote:\nRespected Sir/MadamI am Joseph Raj Vishal, a Computer \nScience undergrad. I have just entered third year at Dayananda Sagar \nCollege of Engineering in Bengaluru. I am new to open source \ncontributions but I am well aware of Python, Java, Django, PHP, SQL, \nJavascript,Kotlin and Android. I would love to contribute to your \norganisation but I would need your help getting started.Hoping to hear from you soonRegardsJoseph Raj Vishal\n\n\n\n-- Best RegardsAndy Fan",
"msg_date": "Sun, 30 Oct 2022 17:16:50 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to started with Contributions"
},
{
"msg_contents": "Thanks, I'll check them out.\n\nOn Sun, 30 Oct, 2022, 2:47 pm Andy Fan, <zhihui.fan1213@gmail.com> wrote:\n\n> HI Joseph:\n>\n> Maybe the following links are helpful.\n>\n>\n> https://www.postgresql.org/message-id/Pine.BSF.4.21.9912052011040.823-100000@thelab.hub.org\n>\n> https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\n>\n> On Sun, Oct 30, 2022 at 3:42 PM 1DS20CS093 Joseph Raj Vishal <\n> josephrvishal@gmail.com> wrote:\n>\n>> Respected Sir/Madam\n>> I am Joseph Raj Vishal, a Computer Science undergrad. I have just entered\n>> third year at Dayananda Sagar College of Engineering in Bengaluru. I am new\n>> to open source contributions but I am well aware of Python, Java, Django,\n>> PHP, SQL, Javascript,Kotlin and Android. I would love to contribute to your\n>> organisation but I would need your help getting started.\n>> Hoping to hear from you soon\n>>\n>> Regards\n>> Joseph Raj Vishal\n>>\n>\n>\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\nThanks, I'll check them out. On Sun, 30 Oct, 2022, 2:47 pm Andy Fan, <zhihui.fan1213@gmail.com> wrote:HI Joseph:Maybe the following links are helpful. https://www.postgresql.org/message-id/Pine.BSF.4.21.9912052011040.823-100000@thelab.hub.org https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F On Sun, Oct 30, 2022 at 3:42 PM 1DS20CS093 Joseph Raj Vishal <josephrvishal@gmail.com> wrote:\nRespected Sir/MadamI am Joseph Raj Vishal, a Computer \nScience undergrad. I have just entered third year at Dayananda Sagar \nCollege of Engineering in Bengaluru. I am new to open source \ncontributions but I am well aware of Python, Java, Django, PHP, SQL, \nJavascript,Kotlin and Android. I would love to contribute to your \norganisation but I would need your help getting started.Hoping to hear from you soonRegardsJoseph Raj Vishal\n\n\n\n-- Best RegardsAndy Fan",
"msg_date": "Sun, 30 Oct 2022 20:15:25 +0530",
"msg_from": "1DS20CS093 Joseph Raj Vishal <josephrvishal@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to started with Contributions"
}
] |
[
{
"msg_contents": "Hi,\n\nAs part of [1] I made IOs-in-progress be tracked by resowner.c. Benchmarking\nunfortunately showed that to have a small impact on workloads that often have\nto read data, but where that data is guaranteed to be in the kernel cache.\n\nI was a bit surprised, given that we also use the resowner.c mechanism for\nbuffer pins, which are obviously more common. But those (and e.g. relache\nreferences) actually also show up in profiles...\n\nThe obvious answeriis to have a few \"embedded\" elements in each ResourceArray,\nso that no allocation is needed for the first few remembered objects in each\ncategory. In a prototype I went with four, since that avoided allocations for\ntrivial queries. That works nicely, delivering small but measurable speedups.\n\nHowever, that approach does increase the size of a ResourceOwner. I don't know\nif it matters that much, my prototype made the size go from 544 to 928 bytes -\nwhich afaict would basically be free currently, because of aset.c rounding\nup. But it'd take just two more ResourceArrays to go above that boundary.\n\nstruct ResourceArray {\n Datum * itemsarr; /* 0 8 */\n Datum invalidval; /* 8 8 */\n uint32 capacity; /* 16 4 */\n uint32 nitems; /* 20 4 */\n uint32 maxitems; /* 24 4 */\n uint32 lastidx; /* 28 4 */\n Datum initialarr[4]; /* 32 32 */\n\n /* size: 64, cachelines: 1, members: 7 */\n};\n\n\nOne way to reduce the size increase would be to use the space for initialarr\nto store variables we don't need while initialarr is used. E.g. itemsarr,\nmaxitems, lastarr are candidates. But I suspect that the code complication\nisn't worth it.\n\n\nA different approach could be to not go for the \"embedded initial elements\"\napproach, but instead to not delete resource owners / resource arrays inside\nResourceOwnerDelete(). We could stash them in a bounded list of resource\nowners, to be reused by ResourceOwnerCreate(). 
We do end up creating\nseveral resource owners even for the simplest queries.\n\nThe advantage of that scheme is that it'd save more and that we'd only reserve\nspace for ResourceArrays that are actually used in the current workload -\noften the majority of arrays won't be.\n\nA potential problem would be that we don't want to use the \"hashing\" style\nResourceArrays forever; I don't think they'll be as fast for other cases. But\nwe could reset the arrays when they get large.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20221029025420.eplyow6k7tgu6he3%40awork3.anarazel.de\n\n\n",
"msg_date": "Sat, 29 Oct 2022 13:00:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "resowner \"cold start\" overhead"
},
{
"msg_contents": "At Sat, 29 Oct 2022 13:00:25 -0700, Andres Freund <andres@anarazel.de> wrote in \n> One way to reduce the size increase would be to use the space for initialarr\n> to store variables we don't need while initialarr is used. E.g. itemsarr,\n> maxitems, lastarr are candidates. But I suspect that the code complication\n> isn't worth it.\n\n+1\n\n> A different approach could be to not go for the \"embedded initial elements\"\n> approach, but instead to not delete resource owners / resource arrays inside\n> ResourceOwnerDelete(). We could stash them in a bounded list of resource\n> owners, to be reused by ResourceOwnerCreate(). We do end up creating a\n> several resource owners even for the simplest queries.\n\nWe often do end up creating several resource owners that aquires not\nan element at all . On the other hand, a few resource owners\nsometimes grown up to 2048 (several times) or 4096 (one time) elements\ndruing a run of the regressiont tests. (I saw catlist, tupdesc and\nrelref grown to 2048 or more elements.)\n\n> The advantage of that scheme is that it'd save more and that we'd only reserve\n> space for ResourceArrays that are actually used in the current workload -\n> often the majority of arrays won't be.\n\nThus I believe preserving resource owners works well. Preserving\nresource arrays also would work for the time efficiency, but some\nresource owners may end up keeping large amount of memory\nunnecessarily most of the time for the backend lifetime. I guess that\nthe amount is far less than the possible bloat by catcache..\n\n> A potential problem would be that we don't want to use the \"hashing\" style\n> ResourceArrays forever, I don't think they'll be as fast for other cases. But\n> we could reset the arrays when they get large.\n\nI'm not sure linear search (am I correct?) doesn't harm for 2048 or\nmore elements. 
I think that the \"hashing\" style doesn't prevent the\narrays from being reset (free-d) at transaction end (or at resource\nowner deletion). That allows releasing unused elements while in\ntransaction but I'm not sure we need to be so keen to reclaim space\nduring a transaction.\n\n> Greetings,\n> \n> Andres Freund\n> \n> https://www.postgresql.org/message-id/20221029025420.eplyow6k7tgu6he3%40awork3.anarazel.de\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:28:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: resowner \"cold start\" overhead"
},
{
"msg_contents": "On 31/10/2022 04:28, Kyotaro Horiguchi wrote:\n> At Sat, 29 Oct 2022 13:00:25 -0700, Andres Freund <andres@anarazel.de> wrote in\n>> One way to reduce the size increase would be to use the space for initialarr\n>> to store variables we don't need while initialarr is used. E.g. itemsarr,\n>> maxitems, lastarr are candidates. But I suspect that the code complication\n>> isn't worth it.\n> \n> +1\n> \n>> A different approach could be to not go for the \"embedded initial elements\"\n>> approach, but instead to not delete resource owners / resource arrays inside\n>> ResourceOwnerDelete(). We could stash them in a bounded list of resource\n>> owners, to be reused by ResourceOwnerCreate(). We do end up creating a\n>> several resource owners even for the simplest queries.\n> \n> We often do end up creating several resource owners that aquires not\n> an element at all . On the other hand, a few resource owners\n> sometimes grown up to 2048 (several times) or 4096 (one time) elements\n> druing a run of the regressiont tests. (I saw catlist, tupdesc and\n> relref grown to 2048 or more elements.)\n> \n>> The advantage of that scheme is that it'd save more and that we'd only reserve\n>> space for ResourceArrays that are actually used in the current workload -\n>> often the majority of arrays won't be.\n> \n> Thus I believe preserving resource owners works well. Preserving\n> resource arrays also would work for the time efficiency, but some\n> resource owners may end up keeping large amount of memory\n> unnecessarily most of the time for the backend lifetime. I guess that\n> the amount is far less than the possible bloat by catcache..\n> \n>> A potential problem would be that we don't want to use the \"hashing\" style\n>> ResourceArrays forever, I don't think they'll be as fast for other cases. But\n>> we could reset the arrays when they get large.\n> \n> I'm not sure linear search (am I correct?) doesn't harm for 2048 or\n> more elements. 
I think that the \"hashing\" style doesn't prevent the\n> arrays from being reset (free-d) at transaction end (or at resource\n> owner deletion). That allows releasing unused elements while in\n> transaction but I'm not sure we need to be so keen to reclaim space\n> during a transaction.\n\nWhat do you think of my ResourceOwner refactoring patches [1]? Reminded \nby this, I rebased and added it to the upcoming commitfest again.\n\nWith that patch, all resources are stored in the same array and hash. \nThe array is part of ResourceOwnerData, so it saves the allocation \noverhead, like the \"initialarr\" that you suggested. And it always uses \nthe array for recently remembered resources, and spills over to the hash \nfor more long-lived resources.\n\nAndres, could you repeat your benchmark with [1], to see if it helps?\n\n[1] \nhttps://www.postgresql.org/message-id/2e10b71b-352e-b97b-1e47-658e2669cecb@iki.fi\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:05:32 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: resowner \"cold start\" overhead"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 11:05:32 +0100, Heikki Linnakangas wrote:\n> What do you think of my ResourceOwner refactoring patches [1]? Reminded by\n> this, I rebased and added it to the upcoming commitfest again.\n\n> With that patch, all resources are stored in the same array and hash. The\n> array is part of ResourceOwnerData, so it saves the allocation overhead,\n> like the \"initialarr\" that you suggested. And it always uses the array for\n> recently remembered resources, and spills over to the hash for more\n> long-lived resources.\n> \n> Andres, could you repeat your benchmark with [1], to see if it helps?\n>\n> [1] https://www.postgresql.org/message-id/2e10b71b-352e-b97b-1e47-658e2669cecb@iki.fi\n\nJust for future readers of this thread: Replied on the other thread.\n\nIt does seem to address the performance issue, but I have some architectural\nconcerns.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:16:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: resowner \"cold start\" overhead"
}
] |
[
{
"msg_contents": "Motivation:\n\nWe haven't fully solved the changing collation-provider problem. An\nupgrade of the OS may change the version of libc or icu, and that might\naffect the collation, which could leave you with various corrupt\ndatabase objects including:\n\n * indexes\n * constraints\n * range types or multiranges (or other types dependent\n on collation for internal consistency)\n * materialized views\n * partitioned tables (range or hash)\n\nThere's discussion about trying to reliably detect these changes and\nremedy them. But there are major challenges; for instance, glibc\ndoesn't give a reliable signal that a collation may have changed, which\nwould leave us with a lot of false positives and create a new set of\nproblems (e.g. reindexing when it's unnecessary). And even with ICU, we\ndon't have a way to support multiple versions of a provider or of a\nsingle collation, so trying to upgrade would still be a hassle.\n\nProposal:\n\nAdd in some tools to make it easier for administrators to find out if\nthey are at risk and solve the problem for themselves in a systematic\nway.\n\nPatches:\n\n 0001: Treat \"default\" collation as unpinned, so that entries in\npg_depend are created. The rationale is that, since the \"default\"\ncollation can change, it's not really an immutable system object, and\nit's worth tracking which objects are affected by it. It seems to bloat\npg_depend by about 5-10% though -- that doesn't seem great, but I'm not\nsure if it's a real problem or not.\n\n 0002: Enable pg_collation_actual_version() to work on the default\ncollation (oid=100) so that it doesn't need to be treated as a special\ncase.\n\n 0003: Fix ALTER COLLATION \"default\" REFRESH VERSION, which currently\nthrows an unhelpful internal error. Instead, issue a more helpful error\nthat suggests \"ALTER DATABASE ... 
REFRESH COLLATION VERSION\" instead.\n\n 0004: Add system views:\n pg_collation_versions: quickly see the current (from the catalog)\nand actual (from the provider) versions of each collation\n pg_collation_dependencies: map of objects to the collations they\ndepend on\n\nAlong with these patches, you can use some tricks to verify data, such\nas /contrib/amcheck; or fix the data with things like:\n\n * REINDEX\n * VACUUM FULL/TRUNCATE/CLUSTER\n * REFRESH MATERIALIZED VIEW\n\nAnd then refresh the collation version when you're confident that your\ndata is valid.\n\nTODO:\n\n * The dependencies view is not rigorously complete, because the\ndirected dependency graph doesn't quite establish an \"affected by\"\nrelationship. One exception is that a composite type doesn't depend on\nits associated relation, so a composite type over a range type doesn't\ndepend on the range type.\n * Consider adding in some verification helpers that can verify that a\nvalue is still valid (e.g. a range type that depends on a collation\nmight have corrupt values). We could have a collation verifier for\ntypes that are collation-dependent, or perhaps just go through the\ninput and output functions and catch any errors.\n * Consider better tracking of which collation versions were active on\na particular object since the last REINDEX (or REFRESH MATERIALIZED\nVIEW, TRUNCATE, or other command that would remove any trace of data\naffected by the previous collation version).\n\nRegards,\n\tJeff Davis",
"msg_date": "Sat, 29 Oct 2022 21:41:16 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "16: Collation versioning and dependency helpers"
},
{
"msg_contents": "On Sun, Oct 30, 2022 at 5:41 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> We haven't fully solved the changing collation-provider problem. An\n> upgrade of the OS may change the version of libc or icu, and that might\n> affect the collation, which could leave you with various corrupt\n> database objects including:\n>\n> * indexes\n> * constraints\n> * range types or multiranges (or other types dependent\n> on collation for internal consistency)\n> * materialized views\n> * partitioned tables (range or hash)\n\nCheck.\n\n> There's discussion about trying to reliably detect these changes and\n> remedy them. But there are major challenges; for instance, glibc\n> doesn't give a reliable signal that a collation may have changed, which\n> would leave us with a lot of false positives and create a new set of\n> problems (e.g. reindexing when it's unnecessary). And even with ICU, we\n> don't have a way to support multiple versions of a provider or of a\n> single collation, so trying to upgrade would still be a hassle.\n\nFWIW some experimental code for multi-version ICU is proposed for\ndiscussion here:\n\nhttps://commitfest.postgresql.org/40/3956/\n\n> Proposal:\n>\n> Add in some tools to make it easier for administrators to find out if\n> they are at risk and solve the problem for themselves in a systematic\n> way.\n\nExcellent goal.\n\n> Patches:\n>\n> 0001: Treat \"default\" collation as unpinned, so that entries in\n> pg_depend are created. The rationale is that, since the \"default\"\n> collation can change, it's not really an immutable system object, and\n> it's worth tracking which objects are affected by it. 
It seems to bloat\n> pg_depend by about 5-10% though -- that doesn't seem great, but I'm not\n> sure if it's a real problem or not.\n\nFWIW we did this (plus a lot more) in the per-index version tracking\nfeature reverted from 14.\n\n> 0002: Enable pg_collation_actual_version() to work on the default\n> collation (oid=100) so that it doesn't need to be treated as a special\n> case.\n\nMakes sense.\n\n> 0003: Fix ALTER COLLATION \"default\" REFRESH VERSION, which currently\n> throws an unhelpful internal error. Instead, issue a more helpful error\n> that suggests \"ALTER DATABASE ... REFRESH COLLATION VERSION\" instead.\n\nMakes sense.\n\n> 0004: Add system views:\n> pg_collation_versions: quickly see the current (from the catalog)\n> and actual (from the provider) versions of each collation\n> pg_collation_dependencies: map of objects to the collations they\n> depend on\n>\n> Along with these patches, you can use some tricks to verify data, such\n> as /contrib/amcheck; or fix the data with things like:\n>\n> * REINDEX\n> * VACUUM FULL/TRUNCATE/CLUSTER\n> * REFRESH MATERIALIZED VIEW\n>\n> And then refresh the collation version when you're confident that your\n> data is valid.\n\nHere you run into an argument that we had many times in that cycle:\nwhat's the point of views that suffer both false positives and false\nnegatives?\n\n> TODO:\n\n> * Consider better tracking of which collation versions were active on\n> a particular object since the last REINDEX (or REFRESH MATERIALIZED\n> VIEW, TRUNCATE, or other command that would remove any trace of data\n> affected by the previous collation version).\n\nRight, the per-object dependency tracking feature, reverted from 14,\naimed to do exactly that. It fell down on (1) some specific bugs that\nwere hard to fix, like dependencies inherited via composite types when\nyou change the composite type, and (2) doubt expressed by Tom, and\nearlier Stephen, that pg_depend was a good place to store version\ninformation.\n\n\n",
"msg_date": "Sun, 30 Oct 2022 19:10:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 16: Collation versioning and dependency helpers"
},
{
"msg_contents": "On Sun, 2022-10-30 at 19:10 +1300, Thomas Munro wrote:\n> FWIW we did this (plus a lot more) in the per-index version tracking\n> feature reverted from 14.\n\nThank you. I will catch up on that patch/thread.\n\n> > 0002: Enable pg_collation_actual_version() to work on the default\n> \n> Makes sense.\n> \n> > 0003: Fix ALTER COLLATION \"default\" REFRESH VERSION, which \n> \n> Makes sense.\n\nCommitted these two small changes.\n\n> > 0004: Add system views:\n> > pg_collation_versions: quickly see the current (from the\n> > catalog)\n> > and actual (from the provider) versions of each collation\n> > pg_collation_dependencies: map of objects to the collations\n> > they\n> > depend on\n> > \n> > Along with these patches, you can use some tricks to verify data,\n> > such\n> > as /contrib/amcheck; or fix the data with things like:\n> > \n> > * REINDEX\n> > * VACUUM FULL/TRUNCATE/CLUSTER\n> > * REFRESH MATERIALIZED VIEW\n> > \n> > And then refresh the collation version when you're confident that\n> > your\n> > data is valid.\n> \n> Here you run into an argument that we had many times in that cycle:\n> what's the point of views that suffer both false positives and false\n> negatives?\n\nThe pg_collation_versions view is just a convenience, useful because\nthe default collation isn't represented normally in pg_collation so it\nneeds to be special-cased.\n\nI could see how it would be tricky to precisely track the dependencies\nthrough composite types (that is, create the proper pg_depend records),\nbut to just provide a view of the affected-by relationship seems more\ndoable. I'll review the previous discussion and see what I come up\nwith.\n\nOf course, the view will just show an \"affected by\" relationship, it\nwon't show which objects are actually in violation of the current\ncollation version. But it at least gives the administrator a starting\nplace.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:36:54 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: 16: Collation versioning and dependency helpers"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 16:36:54 -0700, Jeff Davis wrote:\n> Committed these two small changes.\n\nFWIW, as it stands cfbot can't apply the remaining changes:\nhttp://cfbot.cputube.org/patch_40_3977.log\n\nPerhaps worth posting a new version? Or are the remaining patches abandoned in\nfavor of the other threads?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:53:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: 16: Collation versioning and dependency helpers"
},
{
"msg_contents": "On Tue, 2022-12-06 at 10:53 -0800, Andres Freund wrote:\n> Perhaps worth posting a new version? Or are the remaining patches\n> abandoned in\n> favor of the other threads?\n\nMarked what is there as committed, and the remainder is abandoned in\nfavor of other threads.\n\nThanks,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 06 Dec 2022 11:37:19 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: 16: Collation versioning and dependency helpers"
}
] |
[
{
"msg_contents": "postgres=# \\set QUIET \nCREATE TABLE stxdinp (i int, j int) PARTITION BY RANGE(i);\nCREATE TABLE stxdinp1 PARTITION OF stxdinp FOR VALUES FROM (1)TO(10);\nINSERT INTO stxdinp SELECT generate_series(1,9)a;\nCREATE STATISTICS stxdp ON i,j FROM stxdinp;\nANALYZE stxdinp;\nexplain SELECT i, j, COUNT(1) FROM ONLY stxdinp GROUP BY 1,2;\nERROR: cache lookup failed for statistics object 4060843\n\nIt's evidently an issue with 269b532ae (Add stxdinherit flag to\npg_statistic_ext_data)\n\n(gdb) bt\n#0 pg_re_throw () at elog.c:1795\n#1 0x000000000096578a in errfinish (filename=<optimized out>, filename@entry=0xaf0442 \"extended_stats.c\", lineno=lineno@entry=2467, funcname=funcname@entry=0xaf0720 <__func__.28166> \"statext_expressions_load\") at elog.c:588\n#2 0x00000000007efd07 in statext_expressions_load (stxoid=3914359, inh=<optimized out>, idx=idx@entry=0) at extended_stats.c:2467\n#3 0x0000000000913947 in examine_variable (root=root@entry=0x2b5b530, node=node@entry=0x2b88820, varRelid=varRelid@entry=7, vardata=vardata@entry=0x7ffe5baa2f00) at selfuncs.c:5264\n#4 0x00000000009141ae in get_restriction_variable (root=root@entry=0x2b5b530, args=args@entry=0x2b88a30, varRelid=varRelid@entry=7, vardata=vardata@entry=0x7ffe5baa2f90, other=other@entry=0x7ffe5baa2f88, \n varonleft=varonleft@entry=0x7ffe5baa2f87) at selfuncs.c:4848\n#5 0x0000000000915535 in eqsel_internal (fcinfo=<optimized out>, negate=negate@entry=false) at selfuncs.c:263\n#6 0x00000000009155f6 in eqsel (fcinfo=<optimized out>) at selfuncs.c:226\n#7 0x000000000096a373 in FunctionCall4Coll (flinfo=flinfo@entry=0x7ffe5baa3090, collation=collation@entry=0, arg1=arg1@entry=45462832, arg2=arg2@entry=1320, arg3=arg3@entry=45648432, arg4=arg4@entry=7) at fmgr.c:1198\n#8 0x000000000096a92a in OidFunctionCall4Coll (functionId=<optimized out>, collation=collation@entry=0, arg1=arg1@entry=45462832, arg2=arg2@entry=1320, arg3=arg3@entry=45648432, arg4=arg4@entry=7) at fmgr.c:1434\n#9 0x000000000077e759 in 
restriction_selectivity (root=root@entry=0x2b5b530, operatorid=operatorid@entry=1320, args=0x2b88a30, inputcollid=0, varRelid=varRelid@entry=7) at plancat.c:1880\n#10 0x0000000000728dc9 in clause_selectivity_ext (root=root@entry=0x2b5b530, clause=0x2b889d8, clause@entry=0x2b86e88, varRelid=varRelid@entry=7, jointype=jointype@entry=JOIN_INNER, sjinfo=sjinfo@entry=0x0, \n use_extended_stats=use_extended_stats@entry=true) at clausesel.c:875\n#11 0x00000000007291b6 in clauselist_selectivity_ext (root=root@entry=0x2b5b530, clauses=0x2b88fc0, varRelid=7, jointype=jointype@entry=JOIN_INNER, sjinfo=sjinfo@entry=0x0, use_extended_stats=use_extended_stats@entry=true)\n at clausesel.c:185\n#12 0x000000000072962e in clauselist_selectivity (root=root@entry=0x2b5b530, clauses=<optimized out>, varRelid=<optimized out>, jointype=jointype@entry=JOIN_INNER, sjinfo=sjinfo@entry=0x0) at clausesel.c:108\n#13 0x000000000072fb1d in get_parameterized_baserel_size (root=root@entry=0x2b5b530, rel=rel@entry=0x2b73900, param_clauses=param_clauses@entry=0x2b88f68) at costsize.c:5015\n#14 0x00000000007836f6 in get_baserel_parampathinfo (root=root@entry=0x2b5b530, baserel=baserel@entry=0x2b73900, required_outer=required_outer@entry=0x2b86478) at relnode.c:1346\n#15 0x0000000000776819 in create_seqscan_path (root=root@entry=0x2b5b530, rel=rel@entry=0x2b73900, required_outer=required_outer@entry=0x2b86478, parallel_workers=parallel_workers@entry=0) at pathnode.c:937\n#16 0x000000000077a32c in reparameterize_path (root=root@entry=0x2b5b530, path=path@entry=0x2b847b0, required_outer=required_outer@entry=0x2b86478, loop_count=loop_count@entry=1) at pathnode.c:3872\n#17 0x00000000007249bc in get_cheapest_parameterized_child_path (root=root@entry=0x2b5b530, rel=<optimized out>, required_outer=required_outer@entry=0x2b86478) at allpaths.c:1996\n#18 0x0000000000727619 in add_paths_to_append_rel (root=root@entry=0x2b5b530, rel=rel@entry=0x2b6a6a8, live_childrels=live_childrels@entry=0x2b858e8) at 
allpaths.c:1597\n#19 0x0000000000728084 in set_append_rel_pathlist (root=root@entry=0x2b5b530, rel=rel@entry=0x2b6a6a8, rti=rti@entry=6, rte=rte@entry=0x2b5e1c0) at allpaths.c:1270\n#20 0x0000000000727e17 in set_rel_pathlist (root=root@entry=0x2b5b530, rel=0x2b6a6a8, rti=rti@entry=6, rte=0x2b5e1c0) at allpaths.c:483\n#21 0x0000000000727f8a in set_base_rel_pathlists (root=root@entry=0x2b5b530) at allpaths.c:355\n#22 0x00000000007286fd in make_one_rel (root=root@entry=0x2b5b530, joinlist=joinlist@entry=0x2b6fd98) at allpaths.c:225\n#23 0x00000000007512d5 in query_planner (root=root@entry=0x2b5b530, qp_callback=qp_callback@entry=0x75300a <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffe5baa3670) at planmain.c:276\n#24 0x00000000007589ec in grouping_planner (root=root@entry=0x2b5b530, tuple_fraction=<optimized out>, tuple_fraction@entry=0) at planner.c:1467\n#25 0x000000000075a5f2 in subquery_planner (glob=<optimized out>, parse=parse@entry=0x2b23c08, parent_root=parent_root@entry=0x2736768, hasRecursion=hasRecursion@entry=false, tuple_fraction=<optimized out>) at planner.c:1044\n#26 0x0000000000726567 in set_subquery_pathlist (root=root@entry=0x2736768, rel=rel@entry=0x2755f30, rti=rti@entry=6, rte=rte@entry=0x28c9980) at allpaths.c:2589\n#27 0x000000000072681c in set_rel_size (root=root@entry=0x2736768, rel=rel@entry=0x2755f30, rti=rti@entry=6, rte=rte@entry=0x28c9980) at allpaths.c:425\n#28 0x0000000000726996 in set_base_rel_sizes (root=root@entry=0x2736768) at allpaths.c:326\n#29 0x0000000000728663 in make_one_rel (root=root@entry=0x2736768, joinlist=joinlist@entry=0x274d038) at allpaths.c:188\n#30 0x00000000007512d5 in query_planner (root=root@entry=0x2736768, qp_callback=qp_callback@entry=0x75300a <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffe5baa39d0) at planmain.c:276\n#31 0x00000000007589ec in grouping_planner (root=root@entry=0x2736768, tuple_fraction=<optimized out>, tuple_fraction@entry=0) at planner.c:1467\n#32 0x000000000075a5f2 in 
subquery_planner (glob=glob@entry=0x2483430, parse=parse@entry=0x289c7c8, parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)\n at planner.c:1044\n\nI think this is what's needed.\n\ndiff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\nindex 14e0885f19f..4450f0d682f 100644\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -5240,6 +5240,8 @@ examine_variable(PlannerInfo *root, Node *node, int varRelid,\n \t\t\t/* skip stats without per-expression stats */\n \t\t\tif (info->kind != STATS_EXT_EXPRESSIONS)\n \t\t\t\tcontinue;\n+\t\t\tif (info->inherit != rte->inh)\n+\t\t\t\tcontinue;\n \n \t\t\tpos = 0;\n \t\t\tforeach(expr_item, info->exprs)\n\n\n",
"msg_date": "Sun, 30 Oct 2022 12:05:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg15 inherited stats expressions: cache lookup failed for statistics\n object"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 1:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I think this is what's needed.\n>\n> diff --git a/src/backend/utils/adt/selfuncs.c\n> b/src/backend/utils/adt/selfuncs.c\n> index 14e0885f19f..4450f0d682f 100644\n> --- a/src/backend/utils/adt/selfuncs.c\n> +++ b/src/backend/utils/adt/selfuncs.c\n> @@ -5240,6 +5240,8 @@ examine_variable(PlannerInfo *root, Node *node, int\n> varRelid,\n> /* skip stats without per-expression stats */\n> if (info->kind != STATS_EXT_EXPRESSIONS)\n> continue;\n> + if (info->inherit != rte->inh)\n> + continue;\n>\n> pos = 0;\n> foreach(expr_item, info->exprs)\n>\n\nI think we also need to do this when loading the ndistinct value, to\nskip statistics with mismatching stxdinherit in\nestimate_multivariate_ndistinct().\n\nThanks\nRichard",
"msg_date": "Mon, 31 Oct 2022 12:26:22 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 12:26 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Mon, Oct 31, 2022 at 1:05 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>> I think this is what's needed.\n>>\n>> diff --git a/src/backend/utils/adt/selfuncs.c\n>> b/src/backend/utils/adt/selfuncs.c\n>> index 14e0885f19f..4450f0d682f 100644\n>> --- a/src/backend/utils/adt/selfuncs.c\n>> +++ b/src/backend/utils/adt/selfuncs.c\n>> @@ -5240,6 +5240,8 @@ examine_variable(PlannerInfo *root, Node *node, int\n>> varRelid,\n>> /* skip stats without per-expression stats */\n>> if (info->kind != STATS_EXT_EXPRESSIONS)\n>> continue;\n>> + if (info->inherit != rte->inh)\n>> + continue;\n>>\n>> pos = 0;\n>> foreach(expr_item, info->exprs)\n>>\n>\n> I think we also need to do this when loading the ndistinct value, to\n> skip statistics with mismatching stxdinherit in\n> estimate_multivariate_ndistinct().\n>\n\nTo be concrete, I mean something like attached.\n\nBTW, I noticed a micro-optimization opportunity in examine_variable that\nwe can fetch the RangeTblEntry for 'onerel' outside the foreach loop\nwhen iterating the extended stats so that we can do it only once rather\nthan for each stat.\n\nThanks\nRichard",
"msg_date": "Mon, 31 Oct 2022 13:12:09 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 01:12:09PM +0800, Richard Guo wrote:\n> BTW, I noticed a micro-optimization opportunity in examine_variable that\n> we can fetch the RangeTblEntry for 'onerel' outside the foreach loop\n> when iterating the extended stats so that we can do it only once rather\n> than for each stat.\n\nIsn't that the kind of thing where we'd better have some regression\ncoverage?\n--\nMichael",
"msg_date": "Mon, 31 Oct 2022 14:33:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Oct 31, 2022 at 01:12:09PM +0800, Richard Guo wrote:\n> > BTW, I noticed a micro-optimization opportunity in examine_variable that\n> > we can fetch the RangeTblEntry for 'onerel' outside the foreach loop\n> > when iterating the extended stats so that we can do it only once rather\n> > than for each stat.\n>\n> Isn't that the kind of thing where we'd better have some regression\n> coverage?\n\n\nYeah, we need to have some regression tests for that. I came up with a\ncase in stats_ext like the one below\n\nCREATE STATISTICS stxdinp ON (a + 1), a, b FROM stxdinp;\nSELECT * FROM check_estimated_rows('SELECT a + 1, b FROM ONLY stxdinp GROUP\nBY 1, 2');\n\nThis case should be able to cover both expression stats and ndistinct\nstats. Hence, attached is the v2 patch.\n\nThanks\nRichard",
"msg_date": "Tue, 1 Nov 2022 17:33:24 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 05:33:24PM +0800, Richard Guo wrote:\n> On Mon, Oct 31, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Mon, Oct 31, 2022 at 01:12:09PM +0800, Richard Guo wrote:\n> > > BTW, I noticed a micro-optimization opportunity in examine_variable that\n> > > we can fetch the RangeTblEntry for 'onerel' outside the foreach loop\n> > > when iterating the extended stats so that we can do it only once rather\n> > > than for each stat.\n> >\n> > Isn't that the kind of thing where we'd better have some regression\n> > coverage?\n> \n> Yeah, we need to have some regression tests for that. I come up with a\n> case in stats_ext like below\n\nWell done\n\n> This case should be able to cover both expression stats and ndistinct\n> stats. Hence, attach v2 patch.\n\nThanks for finishing it up.\n\nI added a CF entry and marked RFC.\nThis should be included in v15.1.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:12:40 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I added a CF entry and marked RFC.\n> This should be included in v15.1.\n\nRight, done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Nov 2022 14:35:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 02:35:43PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > I added a CF entry and marked RFC.\n> > This should be included in v15.1.\n> \n> Right, done.\n\nThanks. Yesterday, I realized that the bug was exposed here after we\naccidentally recreated a table as relkind=r rather than relkind=p...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 2 Nov 2022 13:10:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15 inherited stats expressions: cache lookup failed for\n statistics object"
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nThe VariableCacheData says nextOid and oidCount are protected by\nOidGenLock. However, we update them without holding the lock on\nOidGenLock in BootStrapXLOG(). Same as nextXid, for other fields\nthat are protected by XidGenLock, it holds the lock, see\nSetTransactionIdLimit().\n\nvoid\nBootStrapXLOG(void)\n{\n [...]\n\n ShmemVariableCache->nextXid = checkPoint.nextXid;\n ShmemVariableCache->nextOid = checkPoint.nextOid;\n ShmemVariableCache->oidCount = 0;\n\n [...]\n\n SetTransactionIdLimit(checkPoint.oldestXid, checkPoint.oldestXidDB);\n\n [...]\n}\n\nI also found similar code in StartupXLOG(). Why don't we hold the lock\non OidGenLock when updating ShmemVariableCache->nextOid and\nShmemVariableCache->oidCount?\n\nIf the lock is unnecessary, I think adding some comments is better.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 31 Oct 2022 10:48:01 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Lock on ShmemVariableCache fields?"
},
{
"msg_contents": "HI,\n\nOn Oct 31, 2022, 10:48 +0800, Japin Li <japinli@hotmail.com>, wrote:\n>\n> Hi, hackers\n>\n> The VariableCacheData says nextOid and oidCount are protected by\n> OidGenLock. However, we update them without holding the lock on\n> OidGenLock in BootStrapXLOG(). Same as nextXid, for other fields\n> that are protected by XidGenLock, it holds the lock, see\n> SetTransactionIdLimit().\n>\n> void\n> BootStrapXLOG(void)\n> {\n> [...]\n>\n> ShmemVariableCache->nextXid = checkPoint.nextXid;\n> ShmemVariableCache->nextOid = checkPoint.nextOid;\n> ShmemVariableCache->oidCount = 0;\n>\n> [...]\n>\n> SetTransactionIdLimit(checkPoint.oldestXid, checkPoint.oldestXidDB);\n>\n> [...]\n> }\n>\n> I also find a similar code in StartupXLOG(). Why we don't hold the lock\n> on OidGenLock when updating ShmemVariableCache->nextOid and\n> ShmemVariableCache->oidCount?\n>\n> If the lock is unnecessary, I think adding some comments is better.\n>\n> --\n> Regrads,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n>\nAs its name BootStrapXLOG, it’s used in BootStrap mode to initialize the template database.\nThe process doesn’t speak SQL and the database is not ready.\nThere won’t be concurrent access to variables.\n\nRegards,\nZhang Mingli",
"msg_date": "Mon, 31 Oct 2022 14:15:49 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lock on ShmemVariableCache fields?"
},
{
"msg_contents": "\nOn Mon, 31 Oct 2022 at 14:15, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> HI,\n>\n> On Oct 31, 2022, 10:48 +0800, Japin Li <japinli@hotmail.com>, wrote:\n>>\n>> I also find a similar code in StartupXLOG(). Why we don't hold the lock\n>> on OidGenLock when updating ShmemVariableCache->nextOid and\n>> ShmemVariableCache->oidCount?\n>>\n>> If the lock is unnecessary, I think adding some comments is better.\n>>\n> As its name BootStrapXLOG, it’s used in BootStrap mode to initialize the template database.\n> The process doesn’t speak SQL and the database is not ready.\n> There won’t be concurrent access to variables.\n>\n\nThanks for your explanation. I see what you mean. So, in theory, we can also update\neverything in ShmemVariableCache without a lock?\n\nFor example, since SetCommitTsLimit() is only used in BootStrapXLog() and\nStartupXLOG(), we can safely remove the code that acquires/releases the lock?\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 31 Oct 2022 15:14:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lock on ShmemVariableCache fields?"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 03:14:54PM +0800, Japin Li wrote:\n> For example, since SetCommitTsLimit() is only used in BootStrapXLog() and\n> StartupXLOG(), we can safely remove the code of acquiring/releasing lock?\n\nLogically yes, I guess that you could go without the LWLock acquired\nin this routine at this early stage of the game. Now, perhaps that's\nnot worth worrying, but removing these locks could impact any external\ncode relying on SetCommitTsLimit() to actually hold them.\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 14:43:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Lock on ShmemVariableCache fields?"
},
{
"msg_contents": "On 2022-Nov-01, Michael Paquier wrote:\n\n> On Mon, Oct 31, 2022 at 03:14:54PM +0800, Japin Li wrote:\n> > For example, since SetCommitTsLimit() is only used in BootStrapXLog() and\n> > StartupXLOG(), we can safely remove the code of acquiring/releasing lock?\n> \n> Logically yes, I guess that you could go without the LWLock acquired\n> in this routine at this early stage of the game. Now, perhaps that's\n> not worth worrying, but removing these locks could impact any external\n> code relying on SetCommitTsLimit() to actually hold them.\n\nMy 0.02€: From an API point of view it makes no sense to remove\nacquisition of the lock in SetCommitTsLimit, particularly given that\nthat function is not at all performance critical. I think the first\nquestion I would ask, when somebody proposes to make a change like that,\nis *why* do they want to make that change. Is it just because we *can*?\nThat doesn't sound, to me, sufficient motivation. You actually\nintroduce *more* complexity, because after such a change any future\nhacker would have to worry about whether their changes are still valid\nconsidering that these struct members are modified unlocked someplace.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n",
"msg_date": "Thu, 10 Nov 2022 18:59:46 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Lock on ShmemVariableCache fields?"
}
] |
[
{
"msg_contents": "As part of the AIO work [1], Andres mentioned to me that he found that\nprefetching tuple memory during hot pruning showed significant wins.\nI'm not proposing anything to improve HOT pruning here, but as a segue\nto get the prefetching infrastructure in so that there are fewer AIO\npatches, I'm proposing we prefetch the next tuple during sequential\nscans while in page mode.\n\nIt turns out the gains are pretty good when we apply this:\n\n-- table with 4 bytes of user columns\ncreate table t as select a from generate_series(1,10000000)a;\nvacuum freeze t;\nselect pg_prewarm('t');\n\nMaster @ a9f8ca600\n# select * from t where a = 0;\nTime: 355.001 ms\nTime: 354.573 ms\nTime: 354.490 ms\nTime: 354.556 ms\nTime: 354.335 ms\n\nMaster + 0001 + 0003:\n# select * from t where a = 0;\nTime: 328.578 ms\nTime: 329.387 ms\nTime: 329.349 ms\nTime: 329.704 ms\nTime: 328.225 ms (avg ~7.7% faster)\n\n-- table with 64 bytes of user columns\ncreate table t2 as\nselect a,a a2,a a3,a a4,a a5,a a6,a a7,a a8,a a9,a a10,a a11,a a12,a\na13,a a14,a a15,a a16\nfrom generate_series(1,10000000)a;\nvacuum freeze t2;\nselect pg_prewarm('t2');\n\nMaster:\n# select * from t2 where a = 0;\nTime: 501.725 ms\nTime: 501.815 ms\nTime: 503.225 ms\nTime: 501.242 ms\nTime: 502.394 ms\n\nMaster + 0001 + 0003:\n# select * from t2 where a = 0;\nTime: 412.076 ms\nTime: 410.669 ms\nTime: 410.490 ms\nTime: 409.782 ms\nTime: 410.843 ms (avg ~22% faster)\n\nThis was tested on an AMD 3990x CPU. I imagine the CPU matters quite a\nbit here. It would be interesting to see if the same or similar gains\ncan be seen on some modern intel chip too.\n\nI believe Thomas wrote the 0001 patch (same as patch in [2]?). I only\nquickly put together the 0003 patch.\n\nI wondered if we might want to add a macro to 0001 that says if\npg_prefetch_mem() is empty or not then use that to #ifdef out the code\nI added to heapam.c. 
Although, perhaps most compilers will be able to\noptimise away the extra lines that are figuring out what the address\nof the next tuple is.\n\nMy tests above are likely the best case for this. It seems plausible\nto me that if there was a much more complex plan that found a\nreasonable number of tuples and did something with them that we\nwouldn't see the same sort of gains. Also, it also does not seem\nimpossible that the prefetch just results in evicting some\nuseful-to-some-other-exec-node cache line or that the prefetched tuple\ngets flushed out the cache by the time we get around to fetching the\nnext tuple from the scan again due to various other node processing\nthat's occurred since the seq scan was last called. I imagine such\nthings would be indistinguishable from noise, but I've not tested.\n\nI also tried prefetching out by 2 tuples. It didn't help any further\nthan prefetching 1 tuple.\n\nI'll add this to the November CF.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/20210223100344.llw5an2aklengrmn@alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/CA%2BhUKG%2Bpi63ZbcZkYK3XB1pfN%3DkuaDaeV0Ha9E%2BX_p6TTbKBYw%40mail.gmail.com",
"msg_date": "Mon, 31 Oct 2022 16:52:52 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "Hi David,\n\n> I'll add this to the November CF.\n\nThanks for the patch.\n\nI wonder if we can be sure and/or check that there is no performance\ndegradation under different loads and different platforms...\n\nAlso I see 0001 and 0003 but no 0002. Just wanted to double check that\nthere is no patch missing.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:12:22 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 03:12, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I wonder if we can be sure and/or check that there is no performance\n> degradation under different loads and different platforms...\n\nDifferent platforms would be good. Certainly, 1 platform isn't a good\nenough indication that this is going to be useful.\n\nAs for different loads. I imagine the worst case for this will be that\nthe prefetched tuple is flushed from the cache by some other operation\nin the plan making the prefetch useless.\n\nI tried the following so that we read 1 million tuples from a Sort\nnode before coming back and reading another tuple from the seqscan.\n\ncreate table a as select 1 as a from generate_series(1,2) a;\ncreate table b as select 1 as a from generate_series(1,10000000) a;\nvacuum freeze a,b;\n\nselect pg_prewarm('a'),pg_prewarm('b');\nset work_mem = '256MB';\nselect * from a, lateral (select * from b order by a) b offset 20000000;\n\nMaster (@ a9f8ca600)\nTime: 1414.590 ms (00:01.415)\nTime: 1373.584 ms (00:01.374)\nTime: 1373.057 ms (00:01.373)\nTime: 1383.033 ms (00:01.383)\nTime: 1378.865 ms (00:01.379)\n\nMaster + 0001 + 0003:\nTime: 1352.726 ms (00:01.353)\nTime: 1348.306 ms (00:01.348)\nTime: 1358.033 ms (00:01.358)\nTime: 1354.348 ms (00:01.354)\nTime: 1353.971 ms (00:01.354)\n\nAs I'd have expected, I see no regression. It's hard to imagine we'd\nbe able to measure the regression over the overhead of some operation\nthat would evict everything out of cache.\n\nFWIW, this CPU has a 256MB L3 cache and the Sort node's EXPLAIN\nANALYZE looks like:\n\nSort Method: quicksort Memory: 262144kB\n\n> Also I see 0001 and 0003 but no 0002. Just wanted to double check that\n> there is no patch missing.\n\nPerhaps I should resequence the patches to avoid confusion. I didn't\nsend 0002 on purpose. The 0002 is Andres' patch to prefetch during HOT\npruning. 
Here I'm only interested in seeing if we can get the\npg_prefetch_mem macros in core to reduce the number of AIO patches by\n1.\n\nAnother thing about this is that I'm really only fetching the first\ncache line of the tuple. All columns in the t2 table (from the\nearlier email) are fixed width, so accessing the a16 column is a\ncached offset. I ran a benchmark using the same t2 table as my earlier\nemail, i.e:\n\n-- table with 64 bytes of user columns\ncreate table t2 as\nselect a,a a2,a a3,a a4,a a5,a a6,a a7,a a8,a a9,a a10,a a11,a a12,a\na13,a a14,a a15,a a16\nfrom generate_series(1,10000000)a;\nvacuum freeze t2;\n\nMy test is to run 16 queries changing the WHERE clause each time to\nhave WHERE a = 0, then WHERE a2 = 0 ... WHERE a16 = 0. I wanted to\nknow if prefetching only the first cache line of the tuple would be\nless useful when we require evaluation of say, the \"a16\" column vs the\n\"a\" column.\n\nThe times below (in milliseconds) are what I got from a 10-second pgbench run:\n\ncolumn master patched\na 490.571 409.748\na2 428.004 430.927\na3 449.156 453.858\na4 474.945 479.73\na5 514.646 507.809\na6 517.525 519.956\na7 543.587 539.023\na8 562.718 559.387\na9 585.458 584.63\na10 609.143 604.606\na11 645.273 638.535\na12 658.848 657.377\na13 696.395 685.389\na14 702.779 716.722\na15 727.161 723.567\na16 756.186 749.396\n\nI'm not sure how to explain why only the \"a\" column seems to improve\nand the rest seem mostly unaffected.\n\nDavid",
"msg_date": "Tue, 1 Nov 2022 11:17:14 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "Hi:\n\n> Different platforms would be good. Certainly, 1 platform isn't a good\n> enough indication that this is going to be useful.\n\nI just have a different platforms at hand, Here is my test with\nIntel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz.\nshared_buffers has been set to big enough to hold all the data.\n\ncolumns Master Patched Improvement\na 310.931 289.251 6.972608071\na2 329.577 299.975 8.981816085\na3 336.887 313.502 6.941496704\na4 352.099 325.345 7.598431123\na5 358.582 336.486 6.162049406\na6 375.004 349.12 6.902326375\na7 379.699 362.998 4.398484062\na8 391.911 371.41 5.231034597\na9 404.3 383.779 5.075686372\na10 425.48 396.114 6.901852026\na11 449.944 431.826 4.026723326\na12 461.876 443.579 3.961452857\na13 470.59 460.237 2.20000425\na14 483.332 467.078 3.362905829\na15 490.798 472.262 3.776706507\na16 503.321 484.322 3.774728255\n\nBy theory, Why does the preferch make thing better? I am asking this\nbecause I think we need to read the data from buffer to cache line once\nin either case (I'm obvious wrong in face of the test result.)\n\nAnother simple point is the below styles are same. But the format 3 looks\nclearer than others for me. It can tell code reader more stuffs. just fyi.\n\n pg_prefetch_mem(PageGetItem((Page) dp, lpp));\n pg_prefetch_mem(tuple->t_data);\n pg_prefetch_mem((scan->rs_ctup.t_data);\n\n-- \nBest Regards\nAndy Fan\n\n\n",
"msg_date": "Tue, 1 Nov 2022 19:08:52 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 12:09 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> By theory, Why does the preferch make thing better? I am asking this\n> because I think we need to read the data from buffer to cache line once\n> in either case (I'm obvious wrong in face of the test result.)\n\nCPUs have several different kinds of 'hardware prefetchers' (worth\nreading about), that look out for sequential and striding patterns and\ntry to get the cache line ready before you access it. Using the\nprefetch instructions explicitly is called 'software prefetching'\n(special instructions inserted by programmers or compilers). The\ntheory here would have to be that the hardware prefetchers couldn't\npick up the pattern, but we know how to do it. The exact details of\nthe hardware prefetchers vary between chips, and there are even some\nparameters you can adjust in BIOS settings. One idea is that the\nhardware prefetchers are generally biased towards increasing\naddresses, but our tuples tend to go backwards on the page[1]. It's\npossible that some other CPUs can detect backwards strides better, but\nsince real world tuples aren't of equal size anyway, there isn't\nreally a fixed stride at all, so software prefetching seems quite\npromising for this...\n\n[1] https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-PAGE-LAYOUT-FIGURE\n\n\n",
"msg_date": "Wed, 2 Nov 2022 00:42:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, 2 Nov 2022 at 00:09, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I just have a different platforms at hand, Here is my test with\n> Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz.\n> shared_buffers has been set to big enough to hold all the data.\n\nMany thanks for testing that. Those numbers look much better than the\nones I got from my AMD machine.\n\n> By theory, Why does the preferch make thing better? I am asking this\n> because I think we need to read the data from buffer to cache line once\n> in either case (I'm obvious wrong in face of the test result.)\n\nThat's a good question. I didn't really explain that in my email.\n\nThere's quite a bit of information in [1]. My basic understanding is\nthat many modern CPU architectures are ok at \"Sequential Prefetching\"\nof cache lines from main memory when the direction is forward, but I\nbelieve that they're not very good at detecting access patterns that\nare scanning memory addresses in a backwards direction.\n\nBecause of our page layout, we have the page header followed by item\npointers at the start of the page. These item pointers are fixed with\nand point to the tuples, which are variable width. Tuples are written\nstarting at the end of the page. The page is full when the tuples\nwould overlap with the item pointers. See diagrams in [2].\n\nWe do our best to keep those tuples in reverse order of the item\npointer array. This means when we're performing a forward sequence\nscan, we're (generally) reading tuples starting at the end of the page\nand working backwards. Since the CPU is not very good at noticing\nthis and prefetching the preceding cacheline, we can make things go\nfaster (seemingly) by issuing a manual prefetch operation by way of\npg_prefetch_mem().\n\nThe key here is that accessing RAM is far slower than accessing CPU\ncaches. 
Modern CPUs can perform multiple operations in parallel and\nthese can be rearranged by the CPU so they're not in the same order as\nthe instructions are written in the programme. It's possible that\nhigh latency operations such as accessing RAM could hold up other\noperations which depend on the value of what's waiting to come in from\nRAM. If the CPU is held up like this, it's called a pipeline stall\n[3]. The prefetching in this case is helping to reduce the time spent\nstalled waiting for memory access.\n\nDavid\n\n[1] https://en.wikipedia.org/wiki/Cache_prefetching\n[2] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/speeding-up-recovery-and-vacuum-in-postgres-14/ba-p/2234071\n[3] https://en.wikipedia.org/wiki/Pipeline_stall\n\n\n",
"msg_date": "Wed, 2 Nov 2022 00:50:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 16:52:52 +1300, David Rowley wrote:\n> As part of the AIO work [1], Andres mentioned to me that he found that\n> prefetching tuple memory during hot pruning showed significant wins.\n> I'm not proposing anything to improve HOT pruning here\n\nI did try and reproduce my old results, and it does look like we already get\nmost of the gains from prefetching via 18b87b201f7. I see gains from\nprefetching before that patch, but see it hurt after. If I reverse the\niteration order from 18b87b201f7 prefetching helps again.\n\n\n> but as a segue to get the prefetching infrastructure in so that there are\n> fewer AIO patches, I'm proposing we prefetch the next tuple during sequence\n> scans in while page mode.\n\n> Time: 328.225 ms (avg ~7.7% faster)\n> ...\n> Time: 410.843 ms (avg ~22% faster)\n\nThat's a pretty impressive result.\n\n\nI suspect that prefetching in heapgetpage() would provide gains as well, at\nleast for pages that aren't marked all-visible, pretty common in the real\nworld IME.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 20:00:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-01 20:00:43 -0700, Andres Freund wrote:\n> I suspect that prefetching in heapgetpage() would provide gains as well, at\n> least for pages that aren't marked all-visible, pretty common in the real\n> world IME.\n\nAttached is an experimental patch/hack for that. It ended up being more\nbeneficial to make the access ordering more optimal than prefetching the tuple\ncontents, but I'm not at all sure that's the be-all-end-all.\n\n\nI separately benchmarked pinning the CPU and memory to the same socket,\ndifferent socket and interleaving memory.\n\nI did this for HEAD, your patch, your patch and mine.\n\nBEGIN; DROP TABLE IF EXISTS large; CREATE TABLE large(a int8 not null, b int8 not null default '0', c int8); INSERT INTO large SELECT generate_series(1, 50000000);COMMIT;\n\n\nserver is started with\nlocal: numactl --membind 1 --physcpubind 10\nremote: numactl --membind 0 --physcpubind 10\ninterleave: numactl --interleave=all --physcpubind 10\n\nbenchmark stared with:\npsql -qX -f ~/tmp/prewarm.sql && \\\n pgbench -n -f ~/tmp/seqbench.sql -t 1 -r > /dev/null && \\\n perf stat -e task-clock,LLC-loads,LLC-load-misses,cycles,instructions -C\n 10 \\\n pgbench -n -f ~/tmp/seqbench.sql -t 3 -r\n\nseqbench.sql:\nSELECT count(*) FROM large WHERE c IS NOT NULL;\nSELECT sum(a), sum(b), sum(c) FROM large;\nSELECT sum(c) FROM large;\n\nbranch memory time s miss %\nhead local 31.612 74.03\ndavid local 32.034 73.54\ndavid+andres local 31.644 42.80\nandres local 30.863 48.05\n\nhead remote 33.350 72.12\ndavid remote 33.425 71.30\ndavid+andres remote 32.428 49.57\nandres remote 30.907 44.33\n\nhead interleave 32.465 71.33\ndavid interleave 33.176 72.60\ndavid+andres interleave 32.590 46.23\nandres interleave 30.440 45.13\n\nIt's cool seeing how doing optimizing heapgetpage seems to pretty much remove\nthe performance difference between local / remote memory.\n\n\nIt makes some sense that David's patch doesn't help in this case - without\nall-visible 
being set the tuple headers will have already been pulled in for\nthe HTSV call.\n\nI've not yet experimented with moving the prefetch for the tuple contents from\nDavid's location to before the HTSV. I suspect that might benefit both\nworkloads.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 2 Nov 2022 10:25:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 10:25:44 -0700, Andres Freund wrote:\n> server is started with\n> local: numactl --membind 1 --physcpubind 10\n> remote: numactl --membind 0 --physcpubind 10\n> interleave: numactl --interleave=all --physcpubind 10\n\nArgh, forgot to say that this is with max_parallel_workers_per_gather=0,\ns_b=8GB, huge_pages=on.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Nov 2022 13:53:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 5:17 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> My test is to run 16 queries changing the WHERE clause each time to\n> have WHERE a = 0, then WHERE a2 = 0 ... WHERE a16 = 0. I wanted to\n> know if prefetching only the first cache line of the tuple would be\n> less useful when we require evaluation of say, the \"a16\" column vs the\n> \"a\" column.\n\nI tried a similar test, but with text fields of random length, and there is\nimprovement here:\n\nIntel laptop, turbo boost off\nshared_buffers = '4GB'\nhuge_pages = 'on'\nmax_parallel_workers_per_gather = '0'\n\ncreate table text8 as\nselect\nrepeat('X', int4(random() * 20)) a1,\nrepeat('X', int4(random() * 20)) a2,\nrepeat('X', int4(random() * 20)) a3,\nrepeat('X', int4(random() * 20)) a4,\nrepeat('X', int4(random() * 20)) a5,\nrepeat('X', int4(random() * 20)) a6,\nrepeat('X', int4(random() * 20)) a7,\nrepeat('X', int4(random() * 20)) a8\nfrom generate_series(1,10000000) a;\nvacuum freeze text8;\n\npsql -c \"select pg_prewarm('text8')\" && \\\nfor i in a1 a2 a3 a4 a5 a6 a7 a8;\ndo\necho Testing $i\necho \"select * from text8 where $i = 'ZZZ';\" > bench.sql\npgbench -f bench.sql -M prepared -n -T 10 postgres | grep latency\ndone\n\n\nMaster:\n\nTesting a1\nlatency average = 980.595 ms\nTesting a2\nlatency average = 1045.081 ms\nTesting a3\nlatency average = 1107.736 ms\nTesting a4\nlatency average = 1162.188 ms\nTesting a5\nlatency average = 1213.985 ms\nTesting a6\nlatency average = 1272.156 ms\nTesting a7\nlatency average = 1318.281 ms\nTesting a8\nlatency average = 1363.359 ms\n\nPatch 0001+0003:\n\nTesting a1\nlatency average = 812.548 ms\nTesting a2\nlatency average = 897.303 ms\nTesting a3\nlatency average = 955.997 ms\nTesting a4\nlatency average = 1023.497 ms\nTesting a5\nlatency average = 1088.494 ms\nTesting a6\nlatency average = 1149.418 ms\nTesting a7\nlatency average = 1213.134 ms\nTesting a8\nlatency average = 1282.760 ms\n\n--\nJohn Naylor\nEDB: 
http://www.enterprisedb.com",
"msg_date": "Thu, 3 Nov 2022 16:09:14 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Thu, 3 Nov 2022 at 06:25, Andres Freund <andres@anarazel.de> wrote:\n> Attached is an experimental patch/hack for that. It ended up being more\n> beneficial to make the access ordering more optimal than prefetching the tuple\n> contents, but I'm not at all sure that's the be-all-end-all.\n\nThanks for writing that patch. I've been experimenting with it.\n\nI tried unrolling the loop (patch 0003) as you mentioned in:\n\n+ * FIXME: Worth unrolling so that we don't fetch the same cacheline\n+ * over and over, due to line items being smaller than a cacheline?\n\nbut didn't see any gains from doing that.\n\nI also adjusted your patch a little so that instead of doing:\n\n- OffsetNumber rs_vistuples[MaxHeapTuplesPerPage]; /* their offsets */\n+ OffsetNumber *rs_vistuples;\n+ OffsetNumber rs_vistuples_d[MaxHeapTuplesPerPage]; /* their offsets */\n\nto work around the issue of having to populate rs_vistuples_d in\nreverse, I added a new field called rs_startindex to mark where the\nfirst element in the rs_vistuples array is. The way you wrote it seems\nto require fewer code changes, but per the FIXME comment you left, I\nget the idea you just did it the way you did to make it work enough\nfor testing.\n\nI'm quite keen to move forward in committing the 0001 patch to add the\npg_prefetch_mem macro. 
What I'm a little undecided about is what the\nbest patch is to commit first to make use of the new macro.\n\nI did some tests on the attached set of patches:\n\nalter system set max_parallel_workers_per_gather = 0;\nselect pg_reload_conf();\n\ncreate table t as select a from generate_series(1,10000000)a;\nalter table t set (autovacuum_enabled=false);\n\n$ cat bench.sql\nselect * from t where a = 0;\n\npsql -c \"select pg_prewarm('t');\" postgres\n\n-- Test 1 no frozen tuples in \"t\"\n\nMaster (@9c6ad5eaa):\n$ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\nlatency average = 383.332 ms\nlatency average = 375.747 ms\nlatency average = 376.090 ms\n\nMaster + 0001 + 0002:\n$ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\nlatency average = 370.133 ms\nlatency average = 370.149 ms\nlatency average = 370.157 ms\n\nMaster + 0001 + 0005:\n$ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\nlatency average = 372.662 ms\nlatency average = 371.034 ms\nlatency average = 372.709 ms\n\n-- Test 2 \"select count(*) from t\" with all tuples frozen\n\n$ cat bench1.sql\nselect count(*) from t;\n\npsql -c \"vacuum freeze t;\" postgres\npsql -c \"select pg_prewarm('t');\" postgres\n\nMaster (@9c6ad5eaa):\n$ pgbench -n -f bench1.sql -M prepared -T 10 postgres | grep -E \"^latency\"\nlatency average = 406.238 ms\nlatency average = 407.029 ms\nlatency average = 406.962 ms\n\nMaster + 0001 + 0005:\n$ pgbench -n -f bench1.sql -M prepared -T 10 postgres | grep -E \"^latency\"\nlatency average = 345.470 ms\nlatency average = 345.775 ms\nlatency average = 345.354 ms\n\nMy current thoughts are that it might be best to go with 0005 to start\nwith. I know Melanie is working on making some changes in this area,\nso perhaps it's best to leave 0002 until that work is complete.\n\nDavid",
"msg_date": "Wed, 23 Nov 2022 10:58:07 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Thu, 3 Nov 2022 at 22:09, John Naylor <john.naylor@enterprisedb.com> wrote:\n> I tried a similar test, but with text fields of random length, and there is improvement here:\n\nThank you for testing that. Can you share which CPU this was on?\n\nMy tests were all on AMD Zen 2. I'm keen to see what the results are\non intel hardware.\n\nDavid\n\n\n",
"msg_date": "Wed, 23 Nov 2022 11:00:27 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 5:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 3 Nov 2022 at 22:09, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > I tried a similar test, but with text fields of random length, and\nthere is improvement here:\n>\n> Thank you for testing that. Can you share which CPU this was on?\n\nThat was an Intel Core i7-10750H\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Nov 23, 2022 at 5:00 AM David Rowley <dgrowleyml@gmail.com> wrote:>> On Thu, 3 Nov 2022 at 22:09, John Naylor <john.naylor@enterprisedb.com> wrote:> > I tried a similar test, but with text fields of random length, and there is improvement here:>> Thank you for testing that. Can you share which CPU this was on?That was an Intel Core i7-10750H--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 23 Nov 2022 08:35:06 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Tue, Nov 22, 2022 at 1:58 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 3 Nov 2022 at 06:25, Andres Freund <andres@anarazel.de> wrote:\n> > Attached is an experimental patch/hack for that. It ended up being more\n> > beneficial to make the access ordering more optimal than prefetching the\n> tuple\n> > contents, but I'm not at all sure that's the be-all-end-all.\n>\n> Thanks for writing that patch. I've been experimenting with it.\n>\n> I tried unrolling the loop (patch 0003) as you mentioned in:\n>\n> + * FIXME: Worth unrolling so that we don't fetch the same cacheline\n> + * over and over, due to line items being smaller than a cacheline?\n>\n> but didn't see any gains from doing that.\n>\n> I also adjusted your patch a little so that instead of doing:\n>\n> - OffsetNumber rs_vistuples[MaxHeapTuplesPerPage]; /* their offsets */\n> + OffsetNumber *rs_vistuples;\n> + OffsetNumber rs_vistuples_d[MaxHeapTuplesPerPage]; /* their offsets */\n>\n> to work around the issue of having to populate rs_vistuples_d in\n> reverse, I added a new field called rs_startindex to mark where the\n> first element in the rs_vistuples array is. The way you wrote it seems\n> to require fewer code changes, but per the FIXME comment you left, I\n> get the idea you just did it the way you did to make it work enough\n> for testing.\n>\n> I'm quite keen to move forward in committing the 0001 patch to add the\n> pg_prefetch_mem macro. 
What I'm a little undecided about is what the\n> best patch is to commit first to make use of the new macro.\n>\n> I did some tests on the attached set of patches:\n>\n> alter system set max_parallel_workers_per_gather = 0;\n> select pg_reload_conf();\n>\n> create table t as select a from generate_series(1,10000000)a;\n> alter table t set (autovacuum_enabled=false);\n>\n> $ cat bench.sql\n> select * from t where a = 0;\n>\n> psql -c \"select pg_prewarm('t');\" postgres\n>\n> -- Test 1 no frozen tuples in \"t\"\n>\n> Master (@9c6ad5eaa):\n> $ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 383.332 ms\n> latency average = 375.747 ms\n> latency average = 376.090 ms\n>\n> Master + 0001 + 0002:\n> $ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 370.133 ms\n> latency average = 370.149 ms\n> latency average = 370.157 ms\n>\n> Master + 0001 + 0005:\n> $ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 372.662 ms\n> latency average = 371.034 ms\n> latency average = 372.709 ms\n>\n> -- Test 2 \"select count(*) from t\" with all tuples frozen\n>\n> $ cat bench1.sql\n> select count(*) from t;\n>\n> psql -c \"vacuum freeze t;\" postgres\n> psql -c \"select pg_prewarm('t');\" postgres\n>\n> Master (@9c6ad5eaa):\n> $ pgbench -n -f bench1.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 406.238 ms\n> latency average = 407.029 ms\n> latency average = 406.962 ms\n>\n> Master + 0001 + 0005:\n> $ pgbench -n -f bench1.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 345.470 ms\n> latency average = 345.775 ms\n> latency average = 345.354 ms\n>\n> My current thoughts are that it might be best to go with 0005 to start\n> with. 
I know Melanie is working on making some changes in this area,\n> so perhaps it's best to leave 0002 until that work is complete.\n>\n\nI ran your test1 exactly like your setup except the row count is 3000000\n(with 13275 blocks). Shared_buffers is 128MB and the hardware configuration\ndetails at the bottom of the mail. It appears Master + 0001 + 0005 regressed\ncompared to master slightly.\n\nMaster (@56d0ed3b756b2e3799a7bbc0ac89bc7657ca2c33)\n\nBefore vacuum:\n/usr/local/pgsql/bin/pgbench -n -f bench.sql -M prepared -T 30 -P 10\npostgres | grep -E \"^latency\"\nlatency average = 430.287 ms\n\nAfter Vacuum:\n/usr/local/pgsql/bin/pgbench -n -f bench.sql -M prepared -T 30 -P 10\npostgres | grep -E \"^latency\"\nlatency average = 369.046 ms\n\nMaster + 0001 + 0002:\n\nBefore vacuum:\n/usr/local/pgsql/bin/pgbench -n -f bench.sql -M prepared -T 30 -P 10\npostgres | grep -E \"^latency\"\nlatency average = 427.983 ms\n\nAfter Vacuum:\n/usr/local/pgsql/bin/pgbench -n -f bench.sql -M prepared -T 30 -P 10\npostgres | grep -E \"^latency\"\nlatency average = 367.185 ms\n\nMaster + 0001 + 0005:\n\nBefore vacuum:\n/usr/local/pgsql/bin/pgbench -n -f bench.sql -M prepared -T 30 -P 10\npostgres | grep -E \"^latency\"\nlatency average = 447.045 ms\n\nAfter Vacuum:\n/usr/local/pgsql/bin/pgbench -n -f bench.sql -M prepared -T 30 -P 10\npostgres | grep -E \"^latency\"\nlatency average = 374.484 ms\n\nlscpu output\n\nArchitecture:                    x86_64\nCPU op-mode(s):                  32-bit, 64-bit\nByte Order:                      Little Endian\nAddress sizes:                   46 bits physical, 48 bits virtual\nCPU(s):                          1\nOn-line CPU(s) list:             0\nThread(s) per core:              1\nCore(s) per socket:              1\nSocket(s):                       1\nNUMA node(s):                    1\nVendor ID:                       GenuineIntel\nCPU family:                      6\nModel:                           63\nModel name:                      Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz\nStepping:                        2\nCPU MHz:                         2397.224\nBogoMIPS:                        4794.44\nHypervisor vendor:               Microsoft\nVirtualization type:             full\nL1d cache:                       32 KiB\nL1i cache:                       32 KiB\nL2 cache:                        256 KiB\nL3 cache:                        30 MiB\nNUMA node0 CPU(s):               0\nVulnerability 
Itlb multihit: KVM: Mitigation: VMX unsupported\nVulnerability L1tf: Mitigation; PTE Inversion\nVulnerability Mds: Mitigation; Clear CPU buffers; SMT Host\nstate unknown\nVulnerability Meltdown: Mitigation; PTI\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted,\nno microcode; SMT Host state unknown\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and\n__user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled,\nRSB filling\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nFlags: fpu vme de pse tsc msr pae mce cx8 apic\nsep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx\npdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni\n pclmulqdq ssse3 fma cx16 pcid sse4_1\nsse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm\ninvpcid_single pti fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt m\n d_clear\n\n\n>\n> David\n>\n",
"msg_date": "Tue, 22 Nov 2022 23:29:47 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, 23 Nov 2022 at 20:29, sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n> I ran your test1 exactly like your setup except the row count is 3000000 (with 13275 blocks). Shared_buffers is 128MB and the hardware configuration details at the bottom of the mail. It appears Master + 0001 + 0005 regressed compared to master slightly .\n\nThank you for running these tests.\n\nCan you share if the plans used for these queries was a parallel plan?\nI had set max_parallel_workers_per_gather to 0 to remove the\nadditional variability from parallel query.\n\nAlso, 13275 blocks is 104MBs, does EXPLAIN (ANALYZE, BUFFERS) indicate\nthat all pages were in shared buffers? I used pg_prewarm() to ensure\nthey were so that the runs were consistent.\n\nDavid\n\n\n",
"msg_date": "Wed, 23 Nov 2022 20:44:04 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Tue, Nov 22, 2022 at 11:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 23 Nov 2022 at 20:29, sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> > I ran your test1 exactly like your setup except the row count is 3000000\n> (with 13275 blocks). Shared_buffers is 128MB and the hardware configuration\n> details at the bottom of the mail. It appears Master + 0001 + 0005\n> regressed compared to master slightly .\n>\n> Thank you for running these tests.\n>\n> Can you share if the plans used for these queries was a parallel plan?\n> I had set max_parallel_workers_per_gather to 0 to remove the\n> additional variability from parallel query.\n>\n> Also, 13275 blocks is 104MBs, does EXPLAIN (ANALYZE, BUFFERS) indicate\n> that all pages were in shared buffers? I used pg_prewarm() to ensure\n> they were so that the runs were consistent.\n>\n\nI reran the test with setting max_parallel_workers_per_gather = 0 and with\npg_prewarm. Appears I missed some step while testing on the master, thanks\nfor sharing the details. New numbers show master has higher latency\nthan *Master +\n0001 + 0005*.\n\n*Master*\n\nBefore vacuum:\nlatency average = 452.881 ms\n\nAfter vacuum:\nlatency average = 393.880 ms\n\n*Master + 0001 + 0005*\nBefore vacuum:\nlatency average = 441.832 ms\n\nAfter vacuum:\nlatency average = 369.591 ms",
"msg_date": "Wed, 23 Nov 2022 00:27:08 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, 23 Nov 2022 at 21:26, sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n> Master\n> After vacuum:\n> latency average = 393.880 ms\n>\n> Master + 0001 + 0005\n> After vacuum:\n> latency average = 369.591 ms\n\nThank you for running those again. Those results make more sense.\nWould you mind also testing the count(*) query too?\n\nDavid\n\n\n",
"msg_date": "Thu, 24 Nov 2022 02:15:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 12:42:11AM +1300, Thomas Munro wrote:\n> On Wed, Nov 2, 2022 at 12:09 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > By theory, Why does the preferch make thing better? I am asking this\n> > because I think we need to read the data from buffer to cache line once\n> > in either case (I'm obvious wrong in face of the test result.)\n> \n> CPUs have several different kinds of 'hardware prefetchers' (worth\n> reading about), that look out for sequential and striding patterns and\n> try to get the cache line ready before you access it. Using the\n> prefetch instructions explicitly is called 'software prefetching'\n> (special instructions inserted by programmers or compilers). The\n> theory here would have to be that the hardware prefetchers couldn't\n> pick up the pattern, but we know how to do it. The exact details of\n> the hardware prefetchers vary between chips, and there are even some\n> parameters you can adjust in BIOS settings. One idea is that the\n> hardware prefetchers are generally biased towards increasing\n> addresses, but our tuples tend to go backwards on the page[1]. It's\n> possible that some other CPUs can detect backwards strides better, but\n> since real world tuples aren't of equal size anyway, there isn't\n> really a fixed stride at all, so software prefetching seems quite\n> promising for this...\n> \n> [1] https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-PAGE-LAYOUT-FIGURE\n\nI remember someone showing that having our item pointers at the _end_ of\nthe page and tuples at the start moving toward the end increased\nperformance significantly.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 23 Nov 2022 11:03:22 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 11:03:22AM -0500, Bruce Momjian wrote:\n> > CPUs have several different kinds of 'hardware prefetchers' (worth\n> > reading about), that look out for sequential and striding patterns and\n> > try to get the cache line ready before you access it. Using the\n> > prefetch instructions explicitly is called 'software prefetching'\n> > (special instructions inserted by programmers or compilers). The\n> > theory here would have to be that the hardware prefetchers couldn't\n> > pick up the pattern, but we know how to do it. The exact details of\n> > the hardware prefetchers vary between chips, and there are even some\n> > parameters you can adjust in BIOS settings. One idea is that the\n> > hardware prefetchers are generally biased towards increasing\n> > addresses, but our tuples tend to go backwards on the page[1]. It's\n> > possible that some other CPUs can detect backwards strides better, but\n> > since real world tuples aren't of equal size anyway, there isn't\n> > really a fixed stride at all, so software prefetching seems quite\n> > promising for this...\n> > \n> > [1] https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-PAGE-LAYOUT-FIGURE\n> \n> I remember someone showing that having our item pointers at the _end_ of\n> the page and tuples at the start moving toward the end increased\n> performance significantly.\n\nAh, I found it, from 2017, with a 15-25% slowdown:\n\n\thttps://www.postgresql.org/message-id/20171108205943.tps27i2tujsstrg7%40alap3.anarazel.de\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 23 Nov 2022 11:14:51 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, 23 Nov 2022 at 10:58, David Rowley <dgrowleyml@gmail.com> wrote:\n> My current thoughts are that it might be best to go with 0005 to start\n> with. I know Melanie is working on making some changes in this area,\n> so perhaps it's best to leave 0002 until that work is complete.\n\nI tried running TPC-H @ scale 5 with master (@d09dbeb9) vs master +\n0001 + 0005 patch. The results look quite promising. Query 15 seems\nto run 15% faster and overall it's 4.23% faster.\n\nFull results are attached.\n\nDavid",
"msg_date": "Thu, 24 Nov 2022 22:25:09 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 4:58 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> My current thoughts are that it might be best to go with 0005 to start\n> with.\n\n+1\n\n> I know Melanie is working on making some changes in this area,\n> so perhaps it's best to leave 0002 until that work is complete.\n\nThere seem to be some open questions about that one as well.\n\nI reran the same test in [1] (except I don't have the ability to lock clock\nspeed or affect huge pages) on an older CPU from 2014 (Intel(R) Xeon(R) CPU\nE5-2695 v3 @ 2.30GHz, kernel 3.10 gcc 4.8) with good results:\n\nHEAD:\n\nTesting a1\nlatency average = 965.462 ms\nTesting a2\nlatency average = 1054.608 ms\nTesting a3\nlatency average = 1078.263 ms\nTesting a4\nlatency average = 1120.933 ms\nTesting a5\nlatency average = 1162.753 ms\nTesting a6\nlatency average = 1298.876 ms\nTesting a7\nlatency average = 1228.775 ms\nTesting a8\nlatency average = 1293.535 ms\n\n0001+0005:\n\nTesting a1\nlatency average = 791.224 ms\nTesting a2\nlatency average = 876.421 ms\nTesting a3\nlatency average = 911.039 ms\nTesting a4\nlatency average = 981.693 ms\nTesting a5\nlatency average = 998.176 ms\nTesting a6\nlatency average = 979.954 ms\nTesting a7\nlatency average = 1066.523 ms\nTesting a8\nlatency average = 1030.235 ms\n\nI then tested a Power8 machine (also kernel 3.10 gcc 4.8). Configure\nreports \"checking for __builtin_prefetch... yes\", but I don't think it does\nanything here, as the results are within noise level. A quick search didn't\nturn up anything informative on this platform, and I'm not motivated to dig\ndeeper. 
In any case, it doesn't make things worse.\n\nHEAD:\n\nTesting a1\nlatency average = 1402.163 ms\nTesting a2\nlatency average = 1442.971 ms\nTesting a3\nlatency average = 1599.188 ms\nTesting a4\nlatency average = 1664.397 ms\nTesting a5\nlatency average = 1782.091 ms\nTesting a6\nlatency average = 1860.655 ms\nTesting a7\nlatency average = 1929.120 ms\nTesting a8\nlatency average = 2021.100 ms\n\n0001+0005:\n\nTesting a1\nlatency average = 1433.080 ms\nTesting a2\nlatency average = 1428.369 ms\nTesting a3\nlatency average = 1542.406 ms\nTesting a4\nlatency average = 1642.452 ms\nTesting a5\nlatency average = 1737.173 ms\nTesting a6\nlatency average = 1828.239 ms\nTesting a7\nlatency average = 1920.909 ms\nTesting a8\nlatency average = 2036.922 ms\n\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsHqmH_S%3D4apc5agKsJsF6xZ9f6NaH0Z83jUYv3EgySHfw%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Nov 23, 2022 at 4:58 AM David Rowley <dgrowleyml@gmail.com> wrote:> My current thoughts are that it might be best to go with 0005 to start> with.+1> I know Melanie is working on making some changes in this area,> so perhaps it's best to leave 0002 until that work is complete.There seem to be some open questions about that one as well.I reran the same test in [1] (except I don't have the ability to lock clock speed or affect huge pages) on an older CPU from 2014 (Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz, kernel 3.10 gcc 4.8) with good results:HEAD:Testing a1latency average = 965.462 msTesting a2latency average = 1054.608 msTesting a3latency average = 1078.263 msTesting a4latency average = 1120.933 msTesting a5latency average = 1162.753 msTesting a6latency average = 1298.876 msTesting a7latency average = 1228.775 msTesting a8latency average = 1293.535 ms0001+0005:Testing a1latency average = 791.224 msTesting a2latency average = 876.421 msTesting a3latency average = 911.039 msTesting a4latency average = 981.693 msTesting a5latency average = 
998.176 msTesting a6latency average = 979.954 msTesting a7latency average = 1066.523 msTesting a8latency average = 1030.235 msI then tested a Power8 machine (also kernel 3.10 gcc 4.8). Configure reports \"checking for __builtin_prefetch... yes\", but I don't think it does anything here, as the results are within noise level. A quick search didn't turn up anything informative on this platform, and I'm not motivated to dig deeper. In any case, it doesn't make things worse.HEAD:Testing a1latency average = 1402.163 msTesting a2latency average = 1442.971 msTesting a3latency average = 1599.188 msTesting a4latency average = 1664.397 msTesting a5latency average = 1782.091 msTesting a6latency average = 1860.655 msTesting a7latency average = 1929.120 msTesting a8latency average = 2021.100 ms0001+0005:Testing a1latency average = 1433.080 msTesting a2latency average = 1428.369 msTesting a3latency average = 1542.406 msTesting a4latency average = 1642.452 msTesting a5latency average = 1737.173 msTesting a6latency average = 1828.239 msTesting a7latency average = 1920.909 msTesting a8latency average = 2036.922 ms[1] https://www.postgresql.org/message-id/CAFBsxsHqmH_S%3D4apc5agKsJsF6xZ9f6NaH0Z83jUYv3EgySHfw%40mail.gmail.com--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Dec 2022 12:17:49 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 18:18, John Naylor <john.naylor@enterprisedb.com> wrote:\n> I then tested a Power8 machine (also kernel 3.10 gcc 4.8). Configure reports \"checking for __builtin_prefetch... yes\", but I don't think it does anything here, as the results are within noise level. A quick search didn't turn up anything informative on this platform, and I'm not motivated to dig deeper. In any case, it doesn't make things worse.\n\nThanks for testing the power8 hardware.\n\nAndres just let me test on some Apple M1 hardware (those cores are\ninsanely fast!)\n\nUsing the table and running the script from [1], with trimmed-down\noutput, I see:\n\nMaster @ edf12e7bbd\n\nTesting a -> 158.037 ms\nTesting a2 -> 164.442 ms\nTesting a3 -> 171.523 ms\nTesting a4 -> 189.892 ms\nTesting a5 -> 217.197 ms\nTesting a6 -> 186.790 ms\nTesting a7 -> 189.491 ms\nTesting a8 -> 195.384 ms\nTesting a9 -> 200.547 ms\nTesting a10 -> 206.149 ms\nTesting a11 -> 211.708 ms\nTesting a12 -> 217.976 ms\nTesting a13 -> 224.565 ms\nTesting a14 -> 230.642 ms\nTesting a15 -> 237.372 ms\nTesting a16 -> 244.110 ms\n\n(checking for __builtin_prefetch... yes)\n\nMaster + v2-0001 + v2-0005\n\nTesting a -> 157.477 ms\nTesting a2 -> 163.720 ms\nTesting a3 -> 171.159 ms\nTesting a4 -> 186.837 ms\nTesting a5 -> 205.220 ms\nTesting a6 -> 184.585 ms\nTesting a7 -> 189.879 ms\nTesting a8 -> 195.650 ms\nTesting a9 -> 201.220 ms\nTesting a10 -> 207.162 ms\nTesting a11 -> 213.255 ms\nTesting a12 -> 219.313 ms\nTesting a13 -> 225.763 ms\nTesting a14 -> 237.337 ms\nTesting a15 -> 239.440 ms\nTesting a16 -> 245.740 ms\n\nIt does not seem like there's any improvement on this architecture.\nThere is a very small increase from \"a\" to \"a6\", but a very small\ndecrease in performance from \"a7\" to \"a16\". It's likely within the\nexpected noise level.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvqWexy_6jGmB39Vr3OqxZ_w6stAFkq52hODvwaW-19aiA@mail.gmail.com\n\n\n",
"msg_date": "Fri, 2 Dec 2022 14:47:55 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, 23 Nov 2022 at 03:28, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 3 Nov 2022 at 06:25, Andres Freund <andres@anarazel.de> wrote:\n> > Attached is an experimental patch/hack for that. It ended up being more\n> > beneficial to make the access ordering more optimal than prefetching the tuple\n> > contents, but I'm not at all sure that's the be-all-end-all.\n>\n> Thanks for writing that patch. I've been experimenting with it.\n>\n> I tried unrolling the loop (patch 0003) as you mentioned in:\n>\n> + * FIXME: Worth unrolling so that we don't fetch the same cacheline\n> + * over and over, due to line items being smaller than a cacheline?\n>\n> but didn't see any gains from doing that.\n>\n> I also adjusted your patch a little so that instead of doing:\n>\n> - OffsetNumber rs_vistuples[MaxHeapTuplesPerPage]; /* their offsets */\n> + OffsetNumber *rs_vistuples;\n> + OffsetNumber rs_vistuples_d[MaxHeapTuplesPerPage]; /* their offsets */\n>\n> to work around the issue of having to populate rs_vistuples_d in\n> reverse, I added a new field called rs_startindex to mark where the\n> first element in the rs_vistuples array is. The way you wrote it seems\n> to require fewer code changes, but per the FIXME comment you left, I\n> get the idea you just did it the way you did to make it work enough\n> for testing.\n>\n> I'm quite keen to move forward in committing the 0001 patch to add the\n> pg_prefetch_mem macro. 
What I'm a little undecided about is what the\n> best patch is to commit first to make use of the new macro.\n>\n> I did some tests on the attached set of patches:\n>\n> alter system set max_parallel_workers_per_gather = 0;\n> select pg_reload_conf();\n>\n> create table t as select a from generate_series(1,10000000)a;\n> alter table t set (autovacuum_enabled=false);\n>\n> $ cat bench.sql\n> select * from t where a = 0;\n>\n> psql -c \"select pg_prewarm('t');\" postgres\n>\n> -- Test 1 no frozen tuples in \"t\"\n>\n> Master (@9c6ad5eaa):\n> $ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 383.332 ms\n> latency average = 375.747 ms\n> latency average = 376.090 ms\n>\n> Master + 0001 + 0002:\n> $ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 370.133 ms\n> latency average = 370.149 ms\n> latency average = 370.157 ms\n>\n> Master + 0001 + 0005:\n> $ pgbench -n -f bench.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 372.662 ms\n> latency average = 371.034 ms\n> latency average = 372.709 ms\n>\n> -- Test 2 \"select count(*) from t\" with all tuples frozen\n>\n> $ cat bench1.sql\n> select count(*) from t;\n>\n> psql -c \"vacuum freeze t;\" postgres\n> psql -c \"select pg_prewarm('t');\" postgres\n>\n> Master (@9c6ad5eaa):\n> $ pgbench -n -f bench1.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 406.238 ms\n> latency average = 407.029 ms\n> latency average = 406.962 ms\n>\n> Master + 0001 + 0005:\n> $ pgbench -n -f bench1.sql -M prepared -T 10 postgres | grep -E \"^latency\"\n> latency average = 345.470 ms\n> latency average = 345.775 ms\n> latency average = 345.354 ms\n>\n> My current thoughts are that it might be best to go with 0005 to start\n> with. 
I know Melanie is working on making some changes in this area,\n> so perhaps it's best to leave 0002 until that work is complete.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n5212d447fa53518458cbe609092b347803a667c5 ===\n=== applying patch ./v2-0001-Add-pg_prefetch_mem-macro-to-load-cache-lines.patch\n=== applying patch ./v2-0002-Perform-memory-prefetching-in-heapgetpage.patch\npatching file src/backend/access/heap/heapam.c\nHunk #1 FAILED at 451.\n1 out of 6 hunks FAILED -- saving rejects to file\nsrc/backend/access/heap/heapam.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3978.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:36:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 23:06, vignesh C <vignesh21@gmail.com> wrote:\n> patching file src/backend/access/heap/heapam.c\n> Hunk #1 FAILED at 451.\n> 1 out of 6 hunks FAILED -- saving rejects to file\n> src/backend/access/heap/heapam.c.rej\n\nI've moved this patch to the next CF. This patch has a dependency on\nwhat's being proposed in [1]. I'd rather wait until that goes in\nbefore rebasing this. Having this go in first will just make Melanie's\njob harder on her heapam.c refactoring work.\n\nDavid\n\n[1] https://commitfest.postgresql.org/41/3987/\n\n\n",
"msg_date": "Mon, 30 Jan 2023 15:23:47 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Sun, 29 Jan 2023 at 21:24, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I've moved this patch to the next CF. This patch has a dependency on\n> what's being proposed in [1].\n\nThe referenced patch was committed March 19th but there's been no\ncomment here. Is this patch likely to go ahead this release or should\nI move it forward again?\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 3 Apr 2023 15:47:20 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Tue, 4 Apr 2023 at 07:47, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> The referenced patch was committed March 19th but there's been no\n> comment here. Is this patch likely to go ahead this release or should\n> I move it forward again?\n\nThanks for the reminder on this.\n\nI have done some work on it but just didn't post it here as I didn't\nhave good news. The problem I'm facing is that after Melanie's recent\nrefactor work done around heapgettup() [1], I can no longer get the\nsame speedup as before with the pg_prefetch_mem(). While testing\nMelanie's patches, I did do some performance tests and did see a good\nincrease in performance from it. I really don't know the reason why\nthe prefetching does not show the gains as it did before. Perhaps the\nrearranged code is better able to perform hardware prefetching of\ncache lines.\n\nI am, however, inclined not to drop the pg_prefetch_mem() macro\naltogether just because I can no longer demonstrate any performance\ngains during sequential scans, so I decided to go and try what Thomas\nmentioned in [2] to use the prefetching macro to fetch the required\ntuples in PageRepairFragmentation() so that they're cached in CPU\ncache by the time we get to compactify_tuples().\n\nI tried this using the same test as I described in [3] after adjusting\nthe following line to use PANIC instead of LOG:\n\nereport(LOG,\n (errmsg(\"redo done at %X/%X system usage: %s\",\n LSN_FORMAT_ARGS(xlogreader->ReadRecPtr),\n pg_rusage_show(&ru0))));\n\ndoing that allows me to repeat the test using the same WAL each time.\n\namd3990x CPU on Ubuntu 22.10 with 64GB RAM.\n\nshared_buffers = 10GB\ncheckpoint_timeout = '1 h'\nmax_wal_size = 100GB\nmax_connections = 300\n\nMaster:\n\n2023-04-04 15:54:55.635 NZST [15958] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 44.46 s, system: 0.97 s, elapsed: 45.45 s\n2023-04-04 15:56:33.380 NZST [16109] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 43.80 s, 
system: 0.86 s, elapsed: 44.69 s\n2023-04-04 15:57:25.968 NZST [16134] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 44.08 s, system: 0.74 s, elapsed: 44.84 s\n2023-04-04 15:58:53.820 NZST [16158] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 44.20 s, system: 0.72 s, elapsed: 44.94 s\n\nPrefetch Memory in PageRepairFragmentation():\n\n2023-04-04 16:03:16.296 NZST [25921] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 41.73 s, system: 0.77 s, elapsed: 42.52 s\n2023-04-04 16:04:07.384 NZST [25945] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 40.87 s, system: 0.86 s, elapsed: 41.74 s\n2023-04-04 16:05:01.090 NZST [25968] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 41.20 s, system: 0.72 s, elapsed: 41.94 s\n2023-04-04 16:05:49.235 NZST [25996] PANIC: redo done at 0/DC447610\nsystem usage: CPU: user: 41.56 s, system: 0.66 s, elapsed: 42.24 s\n\nAbout 6.7% performance increase over master.\n\nI wonder since I really just did the seqscan patch as a means to get\nthe pg_prefetch_mem() patch in, I wonder if it's ok to scrap that in\nfavour of the PageRepairFragmentation patch.\n\nUpdated patches attached.\n\nDavid\n\n[1] https://postgr.es/m/CAAKRu_YSOnhKsDyFcqJsKtBSrd32DP-jjXmv7hL0BPD-z0TGXQ%40mail.gmail.com\n[2] https://postgr.es/m/CA%2BhUKGJRtzbbhVmb83vbCiMRZ4piOAi7HWLCqs%3DGQ74mUPrP_w%40mail.gmail.com\n[3] https://postgr.es/m/CAApHDvoKwqAzhiuxEt8jSquPJKDpH8DNUZDFUSX9P7DXrJdc3Q%40mail.gmail.com",
"msg_date": "Tue, 4 Apr 2023 16:50:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "> On 4 Apr 2023, at 06:50, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Updated patches attached.\n\nThis patch is marked Waiting on Author, but from the thread it seems Needs\nReview is more apt. I've changed status and also attached a new version of the\npatch as the posted v1 no longer applied due to changes in formatting for Perl\ncode.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 10 Jul 2023 11:32:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "> On 10 Jul 2023, at 11:32, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 4 Apr 2023, at 06:50, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n>> Updated patches attached.\n> \n> This patch is marked Waiting on Author, but from the thread it seems Needs\n> Review is more apt. I've changed status and also attached a new version of the\n> patch as the posted v1 no longer applied due to changes in formatting for Perl\n> code.\n\n..and again with both patches attached. Doh.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 10 Jul 2023 11:34:38 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Mon, 10 Jul 2023 at 15:04, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 10 Jul 2023, at 11:32, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> >> On 4 Apr 2023, at 06:50, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> >> Updated patches attached.\n> >\n> > This patch is marked Waiting on Author, but from the thread it seems Needs\n> > Review is more apt. I've changed status and also attached a new version of the\n> > patch as the posted v1 no longer applied due to changes in formatting for Perl\n> > code.\n>\n> ..and again with both patches attached. Doh.\n\nI'm seeing that there has been no activity in this thread for more\nthan 6 months, I'm planning to close this in the current commitfest\nunless someone is planning to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 09:05:15 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
},
{
"msg_contents": "On Sat, 20 Jan 2024 at 16:35, vignesh C <vignesh21@gmail.com> wrote:\n> I'm seeing that there has been no activity in this thread for more\n> than 6 months, I'm planning to close this in the current commitfest\n> unless someone is planning to take it forward.\n\nThanks for the reminder about this. Since the\nheapgettup/heapgettup_pagemode refactor I was unable to see the same\nperformance gains as I was before.\n\nAlso, since reading \"The Art of Writing Efficient Programs\" I'm led to\nbelieve that modern processor hardware prefetchers can detect and\nprefetch on both forward and backward access patterns. I also saw some\ndiscussion on twitter about this [1].\n\nI'm not sure yet how this translates to non-uniform access patterns,\ne.g. tuples are varying cachelines apart and we do something like only\ndeform attributes in the first cacheline. Will the prefetcher still\nsee the pattern in this case? If it's non-uniform, then how does it\nknow which cacheline to fetch? If the tuple spans multiple cacheline\nand we deform the whole tuple, will accessing the next cacheline in a\nforward direction make the hardware prefetcher forget about the more\ngeneral backward access that's going on?\n\nThese are questions I'll need to learn the answers to before I can\nunderstand what's the best thing to do in this area. The only way to\ntell is to design a benchmark and see how far we can go before the\nhardware prefetcher no longer detects the pattern.\n\nI've withdrawn the patch. I can resubmit once I've done some more\nexperimentation if that experimentation yields positive results.\n\nDavid\n\n[1] https://twitter.com/ID_AA_Carmack/status/1470832912149135360\n\n\n",
"msg_date": "Mon, 22 Jan 2024 23:15:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch the next tuple's memory during seqscans"
}
] |
[
{
"msg_contents": "Hi all,\n\nI am trying to determine optimal standby for automatic failover based on\npg_last_wal_recieve_lsn() value of all slaves.\n\nBut what if max_wal_size is reached, is LSN value reset to zero.\n\nPlease share some documentation that mentions clarification.\n\nAlso suggest if there is any better approach to get node with best RPO out\nof all slaves.\n\nThanks & regards,\nShubham\n\nHi all, I am trying to determine optimal standby for automatic failover based on pg_last_wal_recieve_lsn() value of all slaves.But what if max_wal_size is reached, is LSN value reset to zero.Please share some documentation that mentions clarification.Also suggest if there is any better approach to get node with best RPO out of all slaves.Thanks & regards,Shubham",
"msg_date": "Mon, 31 Oct 2022 11:03:59 +0530",
"msg_from": "Shubham Shingne <shubham.s.shingne@gmail.com>",
"msg_from_op": true,
"msg_subject": "Limit of WAL LSN value of type pg_lsn"
},
{
"msg_contents": "Hi Shubham,\n\nOn Mon, Oct 31, 2022 at 11:04 AM Shubham Shingne\n<shubham.s.shingne@gmail.com> wrote:\n>\n> Hi all,\n>\n> I am trying to determine optimal standby for automatic failover based on pg_last_wal_recieve_lsn() value of all slaves.\n>\n> But what if max_wal_size is reached, is LSN value reset to zero.\n\nPer https://www.postgresql.org/docs/current/runtime-config-wal.html\nmax_wal_size affects how frequently checkpoints are run. It doesn't\nspecify the limit on WAL produced by a given server. Probably\nmax_wal_size is a misnomer.\n\nLSN is never reset to zero AFAIU.\n\n>\n> Please share some documentation that mentions clarification.\n>\n> Also suggest if there is any better approach to get node with best RPO out of all slaves.\n\nIt depends upon the method used to maintain slaves. Googling \"RPO\npostgresql\" provides a lot of links. I found\nhttp://www.postgresql-blog.com/postgresql-backup-strategy-recovery-pitr-wal/\nto be a good start. But there are many others which seem to be useful\ntoo.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 31 Oct 2022 18:25:13 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Limit of WAL LSN value of type pg_lsn"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs per the world clock, the next commit fest will begin in 30 hours\n(11/1 0:00 AoE time). I may have missed something, but it looks like\nwe have no CFM for this one yet.\n\nOpinions, thoughts or volunteers?\n--\nMichael",
"msg_date": "Mon, 31 Oct 2022 14:42:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Commit fest 2022-11"
},
{
"msg_contents": "2022年10月31日(月) 14:42 Michael Paquier <michael@paquier.xyz>:\n>\n> Hi all,\n>\n> As per the world clock, the next commit fest will begin in 30 hours\n> (11/1 0:00 AoE time). I may have missed something, but it looks like\n> we have no CFM for this one yet.\n\n** tumbleweed **\n\n> Opinions, thoughts or volunteers?\n\nThis is on my bucket list of things to do some day, so I guess now is as bad a\ntime as any :). Caveat is that this will have to be a personal\nfree-time project,\nso would be good if someone else is around as well.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 1 Nov 2022 10:01:05 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 6:31 AM Ian Lawrence Barwick <barwick@gmail.com>\nwrote:\n\n> 2022年10月31日(月) 14:42 Michael Paquier <michael@paquier.xyz>:\n> >\n> > Hi all,\n> >\n> > As per the world clock, the next commit fest will begin in 30 hours\n> > (11/1 0:00 AoE time). I may have missed something, but it looks like\n> > we have no CFM for this one yet.\n>\n> ** tumbleweed **\n>\n> > Opinions, thoughts or volunteers?\n>\n> This is on my bucket list of things to do some day, so I guess now is as\n> bad a\n> time as any :). Caveat is that this will have to be a personal\n> free-time project,\n> so would be good if someone else is around as well.\n>\n> Regards\n>\n> Ian Barwick\n>\n>\n>\nI am free. I can help.\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian\n\nOn Tue, Nov 1, 2022 at 6:31 AM Ian Lawrence Barwick <barwick@gmail.com> wrote:2022年10月31日(月) 14:42 Michael Paquier <michael@paquier.xyz>:\n>\n> Hi all,\n>\n> As per the world clock, the next commit fest will begin in 30 hours\n> (11/1 0:00 AoE time). I may have missed something, but it looks like\n> we have no CFM for this one yet.\n\n** tumbleweed **\n\n> Opinions, thoughts or volunteers?\n\nThis is on my bucket list of things to do some day, so I guess now is as bad a\ntime as any :). Caveat is that this will have to be a personal\nfree-time project,\nso would be good if someone else is around as well.\n\nRegards\n\nIan Barwick\n\n\nI am free. I can help. -- I recommend David Deutsch's <<The Beginning of Infinity>> Jian",
"msg_date": "Tue, 1 Nov 2022 09:56:47 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 09:56:47AM +0530, jian he wrote:\n> I am free. I can help.\n\nCommit fest managers are usually people who have a few years of\nexperience behind the community process. There are plenty of patches\nto review, so feel free to pick up a few things and help moving these,\nof course! Thanks.\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 13:35:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 10:01:05AM +0900, Ian Lawrence Barwick wrote:\n> This is on my bucket list of things to do some day, so I guess now is as bad a\n> time as any :). Caveat is that this will have to be a personal\n> free-time project,\n> so would be good if someone else is around as well.\n\nDon't worry, we all have priorities. I'll be around, so I'll help you\nwith all that, switching the status of the CF in a couple of hours.\n\n(Let's discuss in person if necessary next week, I guess?)\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 13:40:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Mon, 31 Oct 2022 at 05:42, Michael Paquier <michael@paquier.xyz> wrote:\n\n> As per the world clock, the next commit fest will begin in 30 hours\n> (11/1 0:00 AoE time). I may have missed something, but it looks like\n> we have no CFM for this one yet.\n>\n> Opinions, thoughts or volunteers?\n\nIf we have no volunteers, maybe it is because the job of CFM is too\nbig now and needs to be split.\n\nI can offer to be co-CFM, and think I can offer to be CFM for about\n100 patches, but I don't have the time to do the whole thing myself.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:59:33 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月1日(火) 13:40 Michael Paquier <michael@paquier.xyz>:\n>\n> On Tue, Nov 01, 2022 at 10:01:05AM +0900, Ian Lawrence Barwick wrote:\n> > This is on my bucket list of things to do some day, so I guess now is as bad a\n> > time as any :). Caveat is that this will have to be a personal\n> > free-time project,\n> > so would be good if someone else is around as well.\n>\n> Don't worry, we all have priorities. I'll be around, so I'll help you\n> with all that, switching the status of the CF in a couple of hours.\n>\n> (Let's discuss in person if necessary next week, I guess?)\n\nYup, Monday by the looks of it.\n\nMy community login is \"barwick\", in case needed.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 1 Nov 2022 19:10:15 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月1日(火) 17:59 Simon Riggs <simon.riggs@enterprisedb.com>:\n>\n> On Mon, 31 Oct 2022 at 05:42, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > As per the world clock, the next commit fest will begin in 30 hours\n> > (11/1 0:00 AoE time). I may have missed something, but it looks like\n> > we have no CFM for this one yet.\n> >\n> > Opinions, thoughts or volunteers?\n>\n> If we have no volunteers, maybe it is because the job of CFM is too\n> big now and needs to be split.\n\n From casual observation it's been like that for the last few CFs, i.e. there\nhave been effectively two CFMs.\n\nI am on record as volunteering earlier in the thread, with the caveat that there\nwill be a limit on what I can do :).\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 1 Nov 2022 19:15:34 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 07:15:34PM +0900, Ian Lawrence Barwick wrote:\n> From casual observation it's been like that for the last few CFs, i.e. there\n> have been effectively two CFMs.\n> \n> I am on record as volunteering earlier in the thread, with the caveat that there\n> will be a limit on what I can do :).\n\nTwo people showing up to help is really great, thanks! I'll be around\nas well this month, so I'll do my share of patches, as usual.\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 19:55:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 07:10:15PM +0900, Ian Lawrence Barwick wrote:\n> Yup, Monday by the looks of it.\n> \n> My community login is \"barwick\", in case needed.\n\nAdding Magnus in CC, in case you need the admin permissions on the CF\napp. (I have no idea how to do it, and I likely lack the permissions\nto do that anyway.)\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 19:59:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 11:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Nov 01, 2022 at 07:10:15PM +0900, Ian Lawrence Barwick wrote:\n> > Yup, Monday by the looks of it.\n> >\n> > My community login is \"barwick\", in case needed.\n>\n> Adding Magnus in CC, in case you need the admin permissions on the CF\n> app. (I have no idea how to do it, and I likely lack the permissions\n> to do that anyway.)\n>\n>\nPermissions added!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Nov 1, 2022 at 11:59 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Nov 01, 2022 at 07:10:15PM +0900, Ian Lawrence Barwick wrote:\n> Yup, Monday by the looks of it.\n> \n> My community login is \"barwick\", in case needed.\n\nAdding Magnus in CC, in case you need the admin permissions on the CF\napp. (I have no idea how to do it, and I likely lack the permissions\nto do that anyway.)Permissions added! -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 1 Nov 2022 17:36:07 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月2日(水) 1:36 Magnus Hagander <magnus@hagander.net>:>\n> On Tue, Nov 1, 2022 at 11:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Tue, Nov 01, 2022 at 07:10:15PM +0900, Ian Lawrence Barwick wrote:\n>> > Yup, Monday by the looks of it.\n>> >\n>> > My community login is \"barwick\", in case needed.\n>>\n>> Adding Magnus in CC, in case you need the admin permissions on the CF\n>> app. (I have no idea how to do it, and I likely lack the permissions\n>> to do that anyway.)\n>>\n>\n> Permissions added!\n\nThanks, I see the extra menu link.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 2 Nov 2022 08:21:52 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Two people showing up to help is really great, thanks! I'll be around\n> as well this month, so I'll do my share of patches, as usual.\n\nFwiw I can help as well -- starting next week. I can't do much this week though.\n\nI would suggest starting with the cfbot to mark anything that isn't\napplying cleanly and passing tests (and looking for more than design\nfeedback) as Waiting on Author and reminding the author that it's\ncommitfest time and a good time to bring the patch into a clean state.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 2 Nov 2022 06:10:08 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n>\n> On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > Two people showing up to help is really great, thanks! I'll be around\n> > as well this month, so I'll do my share of patches, as usual.\n>\n> Fwiw I can help as well -- starting next week. I can't do much this week though.\n>\n> I would suggest starting with the cfbot to mark anything that isn't\n> applying cleanly and passing tests (and looking for more than design\n> feedback) as Waiting on Author and reminding the author that it's\n> commitfest time and a good time to bring the patch into a clean state.\n\nSounds like a plan; I'll make a start on that today/tomorrow as I have\nsome time.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 3 Nov 2022 11:33:57 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月3日(木) 11:33 Ian Lawrence Barwick <barwick@gmail.com>:\n>\n> 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n> >\n> > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > > Two people showing up to help is really great, thanks! I'll be around\n> > > as well this month, so I'll do my share of patches, as usual.\n> >\n> > Fwiw I can help as well -- starting next week. I can't do much this week though.\n> >\n> > I would suggest starting with the cfbot to mark anything that isn't\n> > applying cleanly and passing tests (and looking for more than design\n> > feedback) as Waiting on Author and reminding the author that it's\n> > commitfest time and a good time to bring the patch into a clean state.\n>\n> Sounds like a plan; I'll make a start on that today/tomorrow as I have\n> some time.\n\nPloughing through the list, initially those where the patches don't apply.\n\nI am wondering what the best thing to do with cases like this is:\n\n https://commitfest.postgresql.org/40/3977/\n\nwhere there were multiple patches in the original post, and some but not all\nwere applied - so those ones are now failing to apply in the cfbot. Should we\nrequest the author to update the thread with those patches which are\nstill pending?\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 3 Nov 2022 18:48:34 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 06:48:34PM +0900, Ian Lawrence Barwick wrote:\n> I am wondering what the best thing to do with cases like this is:\n> \n> https://commitfest.postgresql.org/40/3977/\n> \n> where there were multiple patches in the original post, and some but not all\n> were applied - so those ones are now failing to apply in the cfbot. Should we\n> request the author to update the thread with those patches which are\n> still pending?\n\nA case-by-case analysis is usually adapted, but if subject is not over\nyet, and only a portion of the patches have been addressed, keeping it\naround is the best course of action IMO. The author and/or reviewer\nmay decide otherwise later on, or the patch could always be revisited\nat the end of the CF and marked as committed, though it would be good\nto update the thread to reflect that. By experience, it does not\nhappen that often.\n--\nMichael",
"msg_date": "Thu, 3 Nov 2022 20:58:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 11:33:57AM +0900, Ian Lawrence Barwick wrote:\n> 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n> >\n> > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > > Two people showing up to help is really great, thanks! I'll be around\n> > > as well this month, so I'll do my share of patches, as usual.\n> >\n> > Fwiw I can help as well -- starting next week. I can't do much this week though.\n> >\n> > I would suggest starting with the cfbot to mark anything that isn't\n> > applying cleanly and passing tests (and looking for more than design\n> > feedback) as Waiting on Author and reminding the author that it's\n> > commitfest time and a good time to bring the patch into a clean state.\n> \n> Sounds like a plan; I'll make a start on that today/tomorrow as I have\n> some time.\n\nIf I'm not wrong, Jacob used the CF app to bulk-mail people about\npatches not applying and similar things. That seemed to work well, and\ndoesn't require sending mails to dozens of threads.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 3 Nov 2022 19:43:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月4日(金) 9:43 Justin Pryzby <pryzby@telsasoft.com>:\n>\n> On Thu, Nov 03, 2022 at 11:33:57AM +0900, Ian Lawrence Barwick wrote:\n> > 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n> > >\n> > > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > > Two people showing up to help is really great, thanks! I'll be around\n> > > > as well this month, so I'll do my share of patches, as usual.\n> > >\n> > > Fwiw I can help as well -- starting next week. I can't do much this week though.\n> > >\n> > > I would suggest starting with the cfbot to mark anything that isn't\n> > > applying cleanly and passing tests (and looking for more than design\n> > > feedback) as Waiting on Author and reminding the author that it's\n> > > commitfest time and a good time to bring the patch into a clean state.\n> >\n> > Sounds like a plan; I'll make a start on that today/tomorrow as I have\n> > some time.\n>\n> If I'm not wrong, Jacob used the CF app to bulk-mail people about\n> patches not applying and similar things. That seemed to work well, and\n> doesn't require sending mails to dozens of threads.\n\nI don't see anything like that in the CF app (though I may be looking in the\nwrong place). I also don't see how it would be possible to filter on patches\nnot applying in cbfot, as AFAICT the former is not aware of the latter.\n\nThere is an option for each entry to send an email from the CF app, but it comes\nwith a note \"Please ensure that the email settings for your domain (DKIM, SPF)\nallow emails from external sources.\" which I fear would lead to email\ndelivery issues.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 10:23:34 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月4日(金) 10:23 Ian Lawrence Barwick <barwick@gmail.com>:\n>\n> 2022年11月4日(金) 9:43 Justin Pryzby <pryzby@telsasoft.com>:\n> >\n> > On Thu, Nov 03, 2022 at 11:33:57AM +0900, Ian Lawrence Barwick wrote:\n> > > 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n> > > >\n> > > > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz>\nwrote:\n> > > >\n> > > > > Two people showing up to help is really great, thanks! I'll be\naround\n> > > > > as well this month, so I'll do my share of patches, as usual.\n> > > >\n> > > > Fwiw I can help as well -- starting next week. I can't do much this\nweek though.\n> > > >\n> > > > I would suggest starting with the cfbot to mark anything that isn't\n> > > > applying cleanly and passing tests (and looking for more than design\n> > > > feedback) as Waiting on Author and reminding the author that it's\n> > > > commitfest time and a good time to bring the patch into a clean\nstate.\n> > >\n> > > Sounds like a plan; I'll make a start on that today/tomorrow as I have\n> > > some time.\n> >\n> > If I'm not wrong, Jacob used the CF app to bulk-mail people about\n> > patches not applying and similar things. That seemed to work well, and\n> > doesn't require sending mails to dozens of threads.\n>\n> I don't see anything like that in the CF app (though I may be looking in\nthe\n> wrong place). 
I also don't see how it would be possible to filter on\npatches\n> not applying in cbfot, as AFAICT the former is not aware of the\nlatter.Also, having gone through all the cfbot items with non-applying\npatches (single\nred \"X\"), sending a reminder without checking further doesn't seem the right\nthing tod do - in two cases the patch was not applying because it had\nalready\nbeen committed, and with another the consensus was to return it with\nfeedback.\nWith others, it's obvious the threads were recently active and I don't\nthink a\nreminder is necessary right now.\n\nPlease do however let me know if I should be doing something differently.\n\nAnyway changes since yesterday:\n\n Needs review: 164 -> 156\n Waiting on Author: 64 -> 68\n Ready for Committer: 22 -> 21\n Committed: 43 -> 47\n Withdrawn: 9 -> 9\n Rejected: 1 -> 1\n Returned with Feedback: 4 -> 5\n\nFollowing entries are reported with patch apply failure, but it's less clear\n(to me) whether a simple request to update the patch is what is needed at\nthis point, because e.g. 
the thread is long and complex, or has been fairly\ninactive for a while:\n\n- \"AcquireExecutorLocks() and run-time pruning\"\n https://commitfest.postgresql.org/40/3478/\n\n- \"pg_stat_activity: avoid showing state=active with wait_event=ClientRead\"\n https://commitfest.postgresql.org/40/3760/\n\n- \"logical decoding and replication of sequences, take 2\"\n https://commitfest.postgresql.org/40/3823/\n\n- \"Lazy JIT IR code generation to increase JIT speed with partitions\"\n https://commitfest.postgresql.org/40/3071/\n\n- \"Collation version and dependency helpers\"\n https://commitfest.postgresql.org/40/3977/\n\n- \"Time-delayed logical replication subscriber\"\n http://cfbot.cputube.org/patch_40_3581.log\n\n- \"Fix checkpointer sync request queue problems\"\n https://commitfest.postgresql.org/40/3583/\n\n- \"Nonreplayable XLog records by means of overflows and >MaxAllocSize\nlengths\"\n https://commitfest.postgresql.org/40/3590/\n\n- \"Transparent column encryption\"\n https://commitfest.postgresql.org/40/3718/\n\nThe following have all received requests to update the patch(s), and\nhave been set to \"Waiting on Author\" where not already done:\n\n- \"XID formatting and SLRU refactorings (Independent part of: Add 64-bit\nXIDs into PostgreSQL 15)\"\n https://commitfest.postgresql.org/40/3489/\n (-> patches already updated; changed back to \"WfC\")\n\n- \"Add index scan progress to pg_stat_progress_vacuum\"\n https://commitfest.postgresql.org/40/3617/\n\n- \"Adding CommandID to heap xlog records\"\n https://commitfest.postgresql.org/40/3882/\n\n- \"Add semi-join pushdown to postgres_fdw\"\n https://commitfest.postgresql.org/40/3838/\n\n- \"Completed unaccent dictionary with many missing characters\"\n https://commitfest.postgresql.org/40/3631/\n\n- \"Data is copied twice when specifying both child and parent table in\npublication\"\n https://commitfest.postgresql.org/40/3623/\n\n- \"Fix ExecRTCheckPerms() inefficiency with many prunable partitions \"\n 
https://commitfest.postgresql.org/40/3224/\n\n- \"In-place persistence change of a relation\"\n https://commitfest.postgresql.org/40/3461/\n\n- \"Move SLRU data into the regular buffer pool\"\n https://commitfest.postgresql.org/40/3514/\n\n- \"New [relation] options engine\"\n https://commitfest.postgresql.org/40/3536/\n\n- \"Nonreplayable XLog records by means of overflows and >MaxAllocSize\nlengths\"\n https://commitfest.postgresql.org/40/3590/\n\n- \"Page compression for OLTP\"\n https://commitfest.postgresql.org/40/3783/\n\n- \"Provide the facility to set binary format output for specific OID's per\nsession\"\n https://commitfest.postgresql.org/40/3777/\n\n- \"Reducing power consumption when idle\"\n https://commitfest.postgresql.org/40/3566/\n\n- \"Reuse Workers and Replication Slots during Logical Replication\"\n https://commitfest.postgresql.org/40/3784/\n\n- \"Skip replicating the tables specified in except table option\"\n https://commitfest.postgresql.org/40/3646/\n\n- \"Teach pg_waldump to extract FPIs from the WAL stream\"\n https://commitfest.postgresql.org/40/3628/\n\n- \"[PATCH] Equivalence Class Filters\"\n https://commitfest.postgresql.org/40/3524/\n\n- \"improve handling for misconfigured archiving parameters\"\n https://commitfest.postgresql.org/40/3933/\n\n- \"logical decoding and replication of sequences, take 2\"\n https://commitfest.postgresql.org/40/3823/\n\nFollowing open entries had actually already been committed:\n\n- Improve description of XLOG_RUNNING_XACTS\n https://commitfest.postgresql.org/40/3779/\n\n- \"Add native windows on arm64 support\"\n https://commitfest.postgresql.org/40/3561/\n\nFollowing entry was set to \"Returned with feedback\":\n\n- \"On client login event trigger\"\n https://commitfest.postgresql.org/40/2900/\n\n\nRegards\n\nIan Barwick\n\n2022年11月4日(金) 10:23 Ian Lawrence Barwick <barwick@gmail.com>:>> 2022年11月4日(金) 9:43 Justin Pryzby <pryzby@telsasoft.com>:> >> > On Thu, Nov 03, 2022 at 11:33:57AM +0900, Ian 
Lawrence Barwick wrote:> > > 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:> > > >> > > > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:> > > >> > > > > Two people showing up to help is really great, thanks! I'll be around> > > > > as well this month, so I'll do my share of patches, as usual.> > > >> > > > Fwiw I can help as well -- starting next week. I can't do much this week though.> > > >> > > > I would suggest starting with the cfbot to mark anything that isn't> > > > applying cleanly and passing tests (and looking for more than design> > > > feedback) as Waiting on Author and reminding the author that it's> > > > commitfest time and a good time to bring the patch into a clean state.> > >> > > Sounds like a plan; I'll make a start on that today/tomorrow as I have> > > some time.> >> > If I'm not wrong, Jacob used the CF app to bulk-mail people about> > patches not applying and similar things. That seemed to work well, and> > doesn't require sending mails to dozens of threads.>> I don't see anything like that in the CF app (though I may be looking in the> wrong place). 
I also don't see how it would be possible to filter on patches> not applying in cbfot, as AFAICT the former is not aware of the latter.Also, having gone through all the cfbot items with non-applying patches (singlered \"X\"), sending a reminder without checking further doesn't seem the rightthing tod do - in two cases the patch was not applying because it had alreadybeen committed, and with another the consensus was to return it with feedback.With others, it's obvious the threads were recently active and I don't think areminder is necessary right now.Please do however let me know if I should be doing something differently.Anyway changes since yesterday: Needs review: 164 -> 156 Waiting on Author: 64 -> 68 Ready for Committer: 22 -> 21 Committed: 43 -> 47 Withdrawn: 9 -> 9 Rejected: 1 -> 1 Returned with Feedback: 4 -> 5Following entries are reported with patch apply failure, but it's less clear(to me) whether a simple request to update the patch is what is needed atthis point, because e.g. 
the thread is long and complex, or has been fairlyinactive for a while:- \"AcquireExecutorLocks() and run-time pruning\" https://commitfest.postgresql.org/40/3478/- \"pg_stat_activity: avoid showing state=active with wait_event=ClientRead\" https://commitfest.postgresql.org/40/3760/- \"logical decoding and replication of sequences, take 2\" https://commitfest.postgresql.org/40/3823/- \"Lazy JIT IR code generation to increase JIT speed with partitions\" https://commitfest.postgresql.org/40/3071/- \"Collation version and dependency helpers\" https://commitfest.postgresql.org/40/3977/- \"Time-delayed logical replication subscriber\" http://cfbot.cputube.org/patch_40_3581.log- \"Fix checkpointer sync request queue problems\" https://commitfest.postgresql.org/40/3583/- \"Nonreplayable XLog records by means of overflows and >MaxAllocSize lengths\" https://commitfest.postgresql.org/40/3590/- \"Transparent column encryption\" https://commitfest.postgresql.org/40/3718/The following have all received requests to update the patch(s), andhave been set to \"Waiting on Author\" where not already done:- \"XID formatting and SLRU refactorings (Independent part of: Add 64-bit XIDs into PostgreSQL 15)\" https://commitfest.postgresql.org/40/3489/ (-> patches already updated; changed back to \"WfC\")- \"Add index scan progress to pg_stat_progress_vacuum\" https://commitfest.postgresql.org/40/3617/- \"Adding CommandID to heap xlog records\" https://commitfest.postgresql.org/40/3882/- \"Add semi-join pushdown to postgres_fdw\" https://commitfest.postgresql.org/40/3838/- \"Completed unaccent dictionary with many missing characters\" https://commitfest.postgresql.org/40/3631/- \"Data is copied twice when specifying both child and parent table in publication\" https://commitfest.postgresql.org/40/3623/- \"Fix ExecRTCheckPerms() inefficiency with many prunable partitions \" https://commitfest.postgresql.org/40/3224/- \"In-place persistence change of a relation\" 
https://commitfest.postgresql.org/40/3461/- \"Move SLRU data into the regular buffer pool\" https://commitfest.postgresql.org/40/3514/- \"New [relation] options engine\" https://commitfest.postgresql.org/40/3536/- \"Nonreplayable XLog records by means of overflows and >MaxAllocSize lengths\" https://commitfest.postgresql.org/40/3590/- \"Page compression for OLTP\" https://commitfest.postgresql.org/40/3783/- \"Provide the facility to set binary format output for specific OID's per session\" https://commitfest.postgresql.org/40/3777/- \"Reducing power consumption when idle\" https://commitfest.postgresql.org/40/3566/- \"Reuse Workers and Replication Slots during Logical Replication\" https://commitfest.postgresql.org/40/3784/- \"Skip replicating the tables specified in except table option\" https://commitfest.postgresql.org/40/3646/- \"Teach pg_waldump to extract FPIs from the WAL stream\" https://commitfest.postgresql.org/40/3628/- \"[PATCH] Equivalence Class Filters\" https://commitfest.postgresql.org/40/3524/- \"improve handling for misconfigured archiving parameters\" https://commitfest.postgresql.org/40/3933/- \"logical decoding and replication of sequences, take 2\" https://commitfest.postgresql.org/40/3823/Following open entries had actually already been committed:- Improve description of XLOG_RUNNING_XACTS https://commitfest.postgresql.org/40/3779/- \"Add native windows on arm64 support\" https://commitfest.postgresql.org/40/3561/Following entry was set to \"Returned with feedback\":- \"On client login event trigger\" https://commitfest.postgresql.org/40/2900/RegardsIan Barwick",
"msg_date": "Fri, 4 Nov 2022 14:18:17 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On 11/3/22 22:18, Ian Lawrence Barwick wrote:\n> 2022年11月4日(金) 10:23 Ian Lawrence Barwick <barwick@gmail.com\n> <mailto:barwick@gmail.com>>:\n>> 2022年11月4日(金) 9:43 Justin Pryzby <pryzby@telsasoft.com\n> <mailto:pryzby@telsasoft.com>>:\n>> > If I'm not wrong, Jacob used the CF app to bulk-mail people about\n>> > patches not applying and similar things. That seemed to work well, and\n>> > doesn't require sending mails to dozens of threads.\n>>\n>> I don't see anything like that in the CF app (though I may be looking in the\n>> wrong place).\n\nI just used the \"email author\" checkboxes.\n\n>> I also don't see how it would be possible to filter on patches\n>> not applying in cfbot, as AFAICT the former is not aware of the latter.\n\nThat was the hard part. I ended up manually merging the two pages locally.\n\n> Also, having gone through all the cfbot items with non-applying \n> patches (single red \"X\"), sending a reminder without checking\n> further doesn't seem the right thing to do - in two cases the patch\n> was not applying because it had already been committed, and with\n> another the consensus was to return it with feedback. With others,\n> it's obvious the threads were recently active and I don't think a\n> reminder is necessary right now.\n\nTrue. One nice thing about the author-only email is that, as long as you\ndon't send reminders too often (I think there'd been talk before of\nonce, maximum twice, per CF?) then if an author feels there's no reason\nto take action, they don't have to. 
That low-effort strategy also scales\na bit better than making a CFM scan manually, and it's a bit closer in\nmy opinion to the automated reminder feature that's been requested\nfrequently.\n\n> There is an option for each entry to send an email from the CF app, but it comes\n> with a note \"Please ensure that the email settings for your domain (DKIM, SPF)\n> allow emails from external sources.\" which I fear would lead to email\n> delivery issues.\n\nI know some of my bulk emails were delivered to spam folders, so it is a\nfair concern.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 4 Nov 2022 13:18:06 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 07:43:03PM -0500, Justin Pryzby wrote:\n> On Thu, Nov 03, 2022 at 11:33:57AM +0900, Ian Lawrence Barwick wrote:\n> > 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n> > > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > > Two people showing up to help is really great, thanks! I'll be around\n> > > > as well this month, so I'll do my share of patches, as usual.\n> > >\n> > > Fwiw I can help as well -- starting next week. I can't do much this week though.\n> > >\n> > > I would suggest starting with the cfbot to mark anything that isn't\n> > > applying cleanly and passing tests (and looking for more than design\n> > > feedback) as Waiting on Author and reminding the author that it's\n> > > commitfest time and a good time to bring the patch into a clean state.\n> > \n> > Sounds like a plan; I'll make a start on that today/tomorrow as I have\n> > some time.\n> \n> If I'm not wrong, Jacob used the CF app to bulk-mail people about\n> patches not applying and similar things. That seemed to work well, and\n> doesn't require sending mails to dozens of threads.\n\nIf my script is not wrong, these patches add TAP tests, but don't update\nthe requisite ./meson.build file. It seems like it'd be reasonable to\nset them all as WOA until that's done.\n\n$ for a in `git branch -a |sort |grep commitfest/40`; do : echo \"$a...\"; x=`git log -1 --compact-summary \"$a\"`; echo \"$x\" |grep '/t/.*pl.*new' >/dev/null || continue; echo \"$x\" |grep -Fw meson >/dev/null && continue; git log -1 --oneline \"$a\"; done\n... [CF 40/3558] Allow file inclusion in pg_hba and pg_ident files\n... [CF 40/3628] Teach pg_waldump to extract FPIs from the WAL stream\n... [CF 40/3646] Skip replicating the tables specified in except table option\n... [CF 40/3663] Switching XLog source from archive to streaming when primary available\n... [CF 40/3670] pg_rewind: warn when checkpoint hasn't happened after promotion\n... 
[CF 40/3729] Testing autovacuum wraparound\n... [CF 40/3877] vacuumlo: add test to vacuumlo for test coverage\n... [CF 40/3985] TDE key management patches\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 8 Nov 2022 17:12:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月9日(水) 8:12 Justin Pryzby <pryzby@telsasoft.com>:\n>\n> On Thu, Nov 03, 2022 at 07:43:03PM -0500, Justin Pryzby wrote:\n> > On Thu, Nov 03, 2022 at 11:33:57AM +0900, Ian Lawrence Barwick wrote:\n> > > 2022年11月2日(水) 19:10 Greg Stark <stark@mit.edu>:\n> > > > On Tue, 1 Nov 2022 at 06:56, Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > > Two people showing up to help is really great, thanks! I'll be around\n> > > > > as well this month, so I'll do my share of patches, as usual.\n> > > >\n> > > > Fwiw I can help as well -- starting next week. I can't do much this week though.\n> > > >\n> > > > I would suggest starting with the cfbot to mark anything that isn't\n> > > > applying cleanly and passing tests (and looking for more than design\n> > > > feedback) as Waiting on Author and reminding the author that it's\n> > > > commitfest time and a good time to bring the patch into a clean state.\n> > >\n> > > Sounds like a plan; I'll make a start on that today/tomorrow as I have\n> > > some time.\n> >\n> > If I'm not wrong, Jacob used the CF app to bulk-mail people about\n> > patches not applying and similar things. That seemed to work well, and\n> > doesn't require sending mails to dozens of threads.\n>\n> If my script is not wrong, these patches add TAP tests, but don't update\n> the requisite ./meson.build file. It seems like it'd be reasonable to\n> set them all as WOA until that's done.\n>\n> $ for a in `git branch -a |sort |grep commitfest/40`; do : echo \"$a...\"; x=`git log -1 --compact-summary \"$a\"`; echo \"$x\" |grep '/t/.*pl.*new' >/dev/null || continue; echo \"$x\" |grep -Fw meson >/dev/null && continue; git log -1 --oneline \"$a\"; done\n> ... [CF 40/3558] Allow file inclusion in pg_hba and pg_ident files\n> ... [CF 40/3628] Teach pg_waldump to extract FPIs from the WAL stream\n> ... [CF 40/3646] Skip replicating the tables specified in except table option\n> ... 
[CF 40/3663] Switching XLog source from archive to streaming when primary available\n> ... [CF 40/3670] pg_rewind: warn when checkpoint hasn't happened after promotion\n> ... [CF 40/3729] Testing autovacuum wraparound\n> ... [CF 40/3877] vacuumlo: add test to vacuumlo for test coverage\n> ... [CF 40/3985] TDE key management patches\n\nLooks like your script is correct, will update accordingly.\n\nDo we have a FAQ/checklist of meson things to consider for patches anywhere?\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Mon, 14 Nov 2022 21:08:46 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 7:08 AM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> 2022年11月9日(水) 8:12 Justin Pryzby <pryzby@telsasoft.com>:\n....\n> > If my script is not wrong, these patches add TAP tests, but don't update\n> > the requisite ./meson.build file. It seems like it'd be reasonable to\n> > set them all as WOA until that's done.\n> >\n> > $ for a in `git branch -a |sort |grep commitfest/40`; do : echo \"$a...\"; x=`git log -1 --compact-summary \"$a\"`; echo \"$x\" |grep '/t/.*pl.*new' >/dev/null || continue; echo \"$x\" |grep -Fw meson >/dev/null && continue; git log -1 --oneline \"$a\"; done\n> > ... [CF 40/3558] Allow file inclusion in pg_hba and pg_ident files\n> > ... [CF 40/3628] Teach pg_waldump to extract FPIs from the WAL stream\n> > ... [CF 40/3646] Skip replicating the tables specified in except table option\n> > ... [CF 40/3663] Switching XLog source from archive to streaming when primary available\n> > ... [CF 40/3670] pg_rewind: warn when checkpoint hasn't happened after promotion\n> > ... [CF 40/3729] Testing autovacuum wraparound\n> > ... [CF 40/3877] vacuumlo: add test to vacuumlo for test coverage\n> > ... [CF 40/3985] TDE key management patches\n>\n> Looks like your script is correct, will update accordingly.\n>\n> Do we have a FAQ/checklist of meson things to consider for patches anywhere?\n\nIt's possible this has been discussed before, but it seems less than\nideal to have notifications about moving to WOA be sent only in a bulk\nemail hanging off the \"current CF\" email chain as opposed to being\nsent to the individual threads. Perhaps that's something the app\nshould do for us in this situation. Without that though the patch\nauthors are left to wade through unrelated discussion, and, probably\nmore importantly, the patch discussion thread doesn't show the current\nstate (I think bumping there is more likely to prompt activity as\nwell).\n\nJames Coleman\n\n\n",
"msg_date": "Mon, 14 Nov 2022 08:23:34 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月14日(月) 22:23 James Coleman <jtc331@gmail.com>:\n>\n> On Mon, Nov 14, 2022 at 7:08 AM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> >\n> > 2022年11月9日(水) 8:12 Justin Pryzby <pryzby@telsasoft.com>:\n> ....\n> > > If my script is not wrong, these patches add TAP tests, but don't update\n> > > the requisite ./meson.build file. It seems like it'd be reasonable to\n> > > set them all as WOA until that's done.\n> > >\n> > > $ for a in `git branch -a |sort |grep commitfest/40`; do : echo \"$a...\"; x=`git log -1 --compact-summary \"$a\"`; echo \"$x\" |grep '/t/.*pl.*new' >/dev/null || continue; echo \"$x\" |grep -Fw meson >/dev/null && continue; git log -1 --oneline \"$a\"; done\n> > > ... [CF 40/3558] Allow file inclusion in pg_hba and pg_ident files\n> > > ... [CF 40/3628] Teach pg_waldump to extract FPIs from the WAL stream\n> > > ... [CF 40/3646] Skip replicating the tables specified in except table option\n> > > ... [CF 40/3663] Switching XLog source from archive to streaming when primary available\n> > > ... [CF 40/3670] pg_rewind: warn when checkpoint hasn't happened after promotion\n> > > ... [CF 40/3729] Testing autovacuum wraparound\n> > > ... [CF 40/3877] vacuumlo: add test to vacuumlo for test coverage\n> > > ... [CF 40/3985] TDE key management patches\n> >\n> > Looks like your script is correct, will update accordingly.\n> >\n>\n> It's possible this has been discussed before, but it seems less than\n> ideal to have notifications about moving to WOA be sent only in a bulk\n> email hanging off the \"current CF\" email chain as opposed to being\n> sent to the individual threads. Perhaps that's something the app\n> should do for us in this situation. 
Without that though the patch\n> authors are left to wade through unrelated discussion, and, probably\n> more importantly, the patch discussion thread doesn't show the current\n> state (I think bumping there is more likely to prompt activity as\n> well).\n\nFWIW I've been manually \"bumping\" the respective threads, which is somewhat\ntime-consuming but seems to have been quite productive in terms of getting\npatches updated.\n\nWill do same for the above once I've confirmed what is being requested,\n(which I presume is adding the new tests to the 'tests' array in the respective\n\"meson.build\" file; just taking the opportunity to familiarize myself with it).\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Mon, 14 Nov 2022 22:38:28 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "2022年11月14日(月) 22:38 Ian Lawrence Barwick <barwick@gmail.com>:\n>\n> 2022年11月14日(月) 22:23 James Coleman <jtc331@gmail.com>:\n> >\n> > On Mon, Nov 14, 2022 at 7:08 AM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > >\n> > > 2022年11月9日(水) 8:12 Justin Pryzby <pryzby@telsasoft.com>:\n> > ....\n> > > > If my script is not wrong, these patches add TAP tests, but don't update\n> > > > the requisite ./meson.build file. It seems like it'd be reasonable to\n> > > > set them all as WOA until that's done.\n> > > >\n> > > > $ for a in `git branch -a |sort |grep commitfest/40`; do : echo \"$a...\"; x=`git log -1 --compact-summary \"$a\"`; echo \"$x\" |grep '/t/.*pl.*new' >/dev/null || continue; echo \"$x\" |grep -Fw meson >/dev/null && continue; git log -1 --oneline \"$a\"; done\n> > > > ... [CF 40/3558] Allow file inclusion in pg_hba and pg_ident files\n> > > > ... [CF 40/3628] Teach pg_waldump to extract FPIs from the WAL stream\n> > > > ... [CF 40/3646] Skip replicating the tables specified in except table option\n> > > > ... [CF 40/3663] Switching XLog source from archive to streaming when primary available\n> > > > ... [CF 40/3670] pg_rewind: warn when checkpoint hasn't happened after promotion\n> > > > ... [CF 40/3729] Testing autovacuum wraparound\n> > > > ... [CF 40/3877] vacuumlo: add test to vacuumlo for test coverage\n> > > > ... [CF 40/3985] TDE key management patches\n> > >\n> > > Looks like your script is correct, will update accordingly.\n> > >\n> >\n> > It's possible this has been discussed before, but it seems less than\n> > ideal to have notifications about moving to WOA be sent only in a bulk\n> > email hanging off the \"current CF\" email chain as opposed to being\n> > sent to the individual threads. Perhaps that's something the app\n> > should do for us in this situation. 
Without that though the patch\n> > authors are left to wade through unrelated discussion, and, probably\n> > more importantly, the patch discussion thread doesn't show the current\n> > state (I think bumping there is more likely to prompt activity as\n> > well).\n>\n> FWIW I've been manually \"bumping\" the respective threads, which is somewhat\n> time-consuming but seems to have been quite productive in terms of getting\n> patches updated.\n>\n> Will do same for the above once I've confirmed what is being requested,\n> (which I presume is adding the new tests to the 'tests' array in the respective\n> \"meson.build\" file; just taking the opportunity to familiarize myself with it).\n\nVarious mails have since been sent to the appropriate threads; I took the opportunity\nto create a short wiki page:\n\n https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n\nwith relevant details (AFAICS) for anyone not familiar with meson; corrections/\nimprovements welcome.\n\nIn the meantime I notice a number of patches in cfbot are currently failing on\nTAP test \"test_custom_rmgrs/t/001_basic.pl\" in some environments. This was\nadded the other day in commit ae168c794f; there is already a fix for the issue\n( 36e0358e70 ) but the cfbot doesn't have that commit yet.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 16 Nov 2022 22:00:35 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "Hi,\n\nThe Commitfest 2022-11 status still shows as \"In Progress\", Shouldn't\nthe status be changed to \"Closed\" and the entries be moved to the next\ncommitfest.\n\nRegards,\nVignesh\n\nOn Wed, 16 Nov 2022 at 18:30, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> 2022年11月14日(月) 22:38 Ian Lawrence Barwick <barwick@gmail.com>:\n> >\n> > 2022年11月14日(月) 22:23 James Coleman <jtc331@gmail.com>:\n> > >\n> > > On Mon, Nov 14, 2022 at 7:08 AM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > > >\n> > > > 2022年11月9日(水) 8:12 Justin Pryzby <pryzby@telsasoft.com>:\n> > > ....\n> > > > > If my script is not wrong, these patches add TAP tests, but don't update\n> > > > > the requisite ./meson.build file. It seems like it'd be reasonable to\n> > > > > set them all as WOA until that's done.\n> > > > >\n> > > > > $ for a in `git branch -a |sort |grep commitfest/40`; do : echo \"$a...\"; x=`git log -1 --compact-summary \"$a\"`; echo \"$x\" |grep '/t/.*pl.*new' >/dev/null || continue; echo \"$x\" |grep -Fw meson >/dev/null && continue; git log -1 --oneline \"$a\"; done\n> > > > > ... [CF 40/3558] Allow file inclusion in pg_hba and pg_ident files\n> > > > > ... [CF 40/3628] Teach pg_waldump to extract FPIs from the WAL stream\n> > > > > ... [CF 40/3646] Skip replicating the tables specified in except table option\n> > > > > ... [CF 40/3663] Switching XLog source from archive to streaming when primary available\n> > > > > ... [CF 40/3670] pg_rewind: warn when checkpoint hasn't happened after promotion\n> > > > > ... [CF 40/3729] Testing autovacuum wraparound\n> > > > > ... [CF 40/3877] vacuumlo: add test to vacuumlo for test coverage\n> > > > > ... 
[CF 40/3985] TDE key management patches\n> > > >\n> > > > Looks like your script is correct, will update accordingly.\n> > > >\n> > >\n> > > It's possible this has been discussed before, but it seems less than\n> > > ideal to have notifications about moving to WOA be sent only in a bulk\n> > > email hanging off the \"current CF\" email chain as opposed to being\n> > > sent to the individual threads. Perhaps that's something the app\n> > > should do for us in this situation. Without that though the patch\n> > > authors are left to wade through unrelated discussion, and, probably\n> > > more importantly, the patch discussion thread doesn't show the current\n> > > state (I think bumping there is more likely to prompt activity as\n> > > well).\n> >\n> > FWIW I've been manually \"bumping\" the respective threads, which is somewhat\n> > time-consuming but seems to have been quite productive in terms of getting\n> > patches updated.\n> >\n> > Will do same for the above once I've confirmed what is being requested,\n> > (which I presume is adding the new tests to the 'tests' array in the respective\n> > \"meson.build\" file; just taking the opportunity to familiariize myself with it).\n>\n> Various mails since sent to the appropriate threads; I took the opportunity\n> to create a short wiki page:\n>\n> https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n>\n> with relevant details (AFAICS) for anyone not familiar with meson; corrections/\n> improvements welcome.\n>\n> In the meantime I notice a number of patches in cfbot are currently failing on\n> TAP test \"test_custom_rmgrs/t/001_basic.pl\" in some environments. This was\n> added the other day in commit ae168c794f; there is already a fix for the issue\n> ( 36e0358e70 ) but the cfbot doesn't have that commit yet.\n>\n> Regards\n>\n> Ian Barwick\n\n\n",
"msg_date": "Wed, 7 Dec 2022 08:25:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest 2022-11"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 08:25:25AM +0530, vignesh C wrote:\n> The Commitfest 2022-11 status still shows as \"In Progress\", Shouldn't\n> the status be changed to \"Closed\" and the entries be moved to the next\n> commitfest.\n\nYes, Ian has told me that he is on it this week.\n--\nMichael",
"msg_date": "Wed, 7 Dec 2022 12:20:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest 2022-11"
}
] |
[
{
"msg_contents": "I noticed that some (not all) callers didn't check the return value of \npclose() or ClosePipeStream() correctly. Either they didn't check it at \nall or they treated it like the return of fclose(). Here is a patch \nwith fixes.\n\n(A failure to run the command issued by popen() is usually reported via \nthe pclose() status, so while you can often get away with not checking \nfclose() or close(), checking pclose() is more often useful.)\n\nThere are some places where the return value is apparently intentionally \nignored, such as in error recovery paths, or psql ignoring a failure to \nlaunch the pager. (The intention can usually be inferred by the kind of \nerror checking attached to the corresponding popen() call.) But there \nare a few places in psql that I'm suspicious about that I have marked, \nbut need to think about further.",
"msg_date": "Mon, 31 Oct 2022 09:12:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Check return value of pclose() correctly"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 09:12:53AM +0100, Peter Eisentraut wrote:\n> I noticed that some (not all) callers didn't check the return value of\n> pclose() or ClosePipeStream() correctly. Either they didn't check it at all\n> or they treated it like the return of fclose(). Here is a patch with fixes.\n> \n> (A failure to run the command issued by popen() is usually reported via the\n> pclose() status, so while you can often get away with not checking fclose()\n> or close(), checking pclose() is more often useful.)\n \n- if (WIFEXITED(exitstatus))\n+ if (exitstatus == -1)\n+ {\n+ snprintf(str, sizeof(str), \"%m\");\n+ }\nThis addition in wait_result_to_str() looks inconsistent with the \nexisting callers of pclose() and ClosePipeStream() that check for -1\nas exit status. copyfrom.c and basebackup_to_shell.c fall into this\ncategory. Wouldn't it be better to unify everything?\n\n> There are some places where the return value is apparently intentionally\n> ignored, such as in error recovery paths, or psql ignoring a failure to\n> launch the pager. (The intention can usually be inferred by the kind of\n> error checking attached to the corresponding popen() call.) But there are a\n> few places in psql that I'm suspicious about that I have marked, but need to\n> think about further.\n\nHmm. I would leave these out, I think. setQFout() relies on the\nresult of openQueryOutputFile(). And this could make commands like\n\\watch less reliable.\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 14:35:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Check return value of pclose() correctly"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Oct 31, 2022 at 09:12:53AM +0100, Peter Eisentraut wrote:\n>> (A failure to run the command issued by popen() is usually reported via the\n>> pclose() status, so while you can often get away with not checking fclose()\n>> or close(), checking pclose() is more often useful.)\n \n> - if (WIFEXITED(exitstatus))\n> + if (exitstatus == -1)\n> + {\n> + snprintf(str, sizeof(str), \"%m\");\n> + }\n> This addition in wait_result_to_str() looks inconsistent with the \n> existing callers of pclose() and ClosePipeStream() that check for -1\n> as exit status. copyfrom.c and basebackup_to_shell.c fall into this\n> category. Wouldn't it be better to unify everything?\n\nI think there are two issues here. POSIX says\n\n Upon successful return, pclose() shall return the termination status\n of the command language interpreter. Otherwise, pclose() shall return\n -1 and set errno to indicate the error.\n\nThat is, first you need to make sure that pclose returned a valid\nchild process status, and then you need to decode that status.\nIt's not obvious to me that -1 is disjoint from the set of possible\nchild statuses. Do we need to add some logic that clears and then\nchecks errno?\n\nAlso, we have a number of places --- at least FreeDesc() and\nClosePipeStream() --- that consider pclose()'s return value to be\nperfectly equivalent to that of close() etc, because they'll\nreturn either one without telling the caller which is which.\nIt seems like we have to fix that if we want to issue sane\nerror reports.\n\nThis patch isn't moving things forward on this fundamental\nconfusion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Nov 2022 01:52:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check return value of pclose() correctly"
},
{
"msg_contents": "On 01.11.22 06:35, Michael Paquier wrote:\n> - if (WIFEXITED(exitstatus))\n> + if (exitstatus == -1)\n> + {\n> + snprintf(str, sizeof(str), \"%m\");\n> + }\n> This addition in wait_result_to_str() looks inconsistent with the\n> existing callers of pclose() and ClosePipeStream() that check for -1\n> as exit status. copyfrom.c and basebackup_to_shell.c fall into this\n> category. Wouldn't it be better to unify everything?\n\nWith the above addition, the extra check for -1 at those existing places \ncould be removed.\n\n>> There are some places where the return value is apparently intentionally\n>> ignored, such as in error recovery paths, or psql ignoring a failure to\n>> launch the pager. (The intention can usually be inferred by the kind of\n>> error checking attached to the corresponding popen() call.) But there are a\n>> few places in psql that I'm suspicious about that I have marked, but need to\n>> think about further.\n> \n> Hmm. I would leave these out, I think. setQFout() relies on the\n> result of openQueryOutputFile(). And this could make commands like\n> \\watch less reliable.\n\nI don't quite understand what you are saying here. My point is that, \nfor example, setQFout() thinks it's important to check the result of \npopen() and write an error message, but it doesn't check the result of \npclose() at all. I don't think that makes sense in practice.\n\n\n\n",
"msg_date": "Tue, 1 Nov 2022 16:30:50 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Check return value of pclose() correctly"
},
{
"msg_contents": "On 01.11.22 06:52, Tom Lane wrote:\n> I think there are two issues here. POSIX says\n> \n> Upon successful return, pclose() shall return the termination status\n> of the command language interpreter. Otherwise, pclose() shall return\n> -1 and set errno to indicate the error.\n> \n> That is, first you need to make sure that pclose returned a valid\n> child process status, and then you need to decode that status.\n> It's not obvious to me that -1 is disjoint from the set of possible\n> child statuses. Do we need to add some logic that clears and then\n> checks errno?\n\nThis return convention is also used by system() and is widely used. So \nI don't think we need to be concerned about this.\n\nIn practice, int is 4 bytes and WEXITSTATUS() and WTERMSIG() are one \nbyte each, so they are probably in the lower bytes, and wouldn't \naccidentally make up a -1.\n\n> Also, we have a number of places --- at least FreeDesc() and\n> ClosePipeStream() --- that consider pclose()'s return value to be\n> perfectly equivalent to that of close() etc, because they'll\n> return either one without telling the caller which is which.\n> It seems like we have to fix that if we want to issue sane\n> error reports.\n\nI think this works. FreeDesc() returns the pclose() exit status to \nClosePipeStream(), which returns it directly. No interpretation is done \nwithin these functions.\n\n\n\n",
"msg_date": "Tue, 1 Nov 2022 16:41:15 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Check return value of pclose() correctly"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi Peter,\nThis is a review of the pclose return value check patch:\n\nContents & Purpose:\nPurpose of this patch is to properly handle return value of pclose (indirectly, return from ClosePipeStream).\n\nInitial Run:\nThe patch applies cleanly to HEAD. The regression tests all pass\nsuccessfully against the new patch.\n\nConclusion:\nAt some places pclose return value is handled and this patch adds return value check for remaining values.\nImplementation is in sync with existing handling of pclose.\n\n- Ankit K Pandey",
"msg_date": "Wed, 02 Nov 2022 15:26:53 +0000",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check return value of pclose() correctly"
},
{
"msg_contents": "On 01.11.22 21:30, Peter Eisentraut wrote:\n>>> There are some places where the return value is apparently intentionally\n>>> ignored, such as in error recovery paths, or psql ignoring a failure to\n>>> launch the pager. (The intention can usually be inferred by the kind of\n>>> error checking attached to the corresponding popen() call.) But \n>>> there are a\n>>> few places in psql that I'm suspicious about that I have marked, but \n>>> need to\n>>> think about further.\n>>\n>> Hmm. I would leave these out, I think. setQFout() relies on the\n>> result of openQueryOutputFile(). And this could make commands like\n>> \\watch less reliable.\n> \n> I don't quite understand what you are saying here. My point is that, \n> for example, setQFout() thinks it's important to check the result of \n> popen() and write an error message, but it doesn't check the result of \n> pclose() at all. I don't think that makes sense in practice.\n\nI have looked this over again. In these cases, if the piped-to command \nis faulty, you get a \"broken pipe\" error anyway, so the effect of not \nchecking the pclose() result is negligible. So I have removed the \n\"FIXME\" markers without further action.\n\nThere is also the question whether we want to check the exit status of a \nuser-supplied command, such as in pgbench's \\setshell. I have dialed \nback my patch there, since I don't know what the current practice in \npgbench scripts is. If the command fails badly, pgbench will probably \ncomplain anyway about invalid output.\n\nMore important is that something like pg_upgrade does check the exit \nstatus when it calls pg_controldata etc. That's what this patch \naccomplishes.",
"msg_date": "Tue, 8 Nov 2022 14:14:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Check return value of pclose() correctly"
},
{
"msg_contents": "On 02.11.22 16:26, Ankit Kumar Pandey wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> Hi Peter,\n> This is a review of the pclose return value check patch:\n> \n> Contents & Purpose:\n> Purpose of this patch is to properly handle return value of pclose (indirectly, return from ClosePipeStream).\n> \n> Initial Run:\n> The patch applies cleanly to HEAD. The regression tests all pass\n> successfully against the new patch.\n> \n> Conclusion:\n> At some places pclose return value is handled and this patch adds return value check for remaining values.\n> Implementation is in sync with existing handling of pclose.\n\nCommitted. Thanks for the review.\n\n\n\n",
"msg_date": "Tue, 15 Nov 2022 15:52:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Check return value of pclose() correctly"
}
] |
[
{
"msg_contents": "While working on something else, I noticed $SUBJECT: we do not\ncurrently allow row-level triggers on partitioned tables to have\ntransition tables like this:\n\ncreate table parted_trig (a int) partition by list (a);\nCREATE TABLE\ncreate function trigger_nothing() returns trigger language plpgsql as\n$$ begin end; $$;\nCREATE FUNCTION\ncreate trigger failed after update on parted_trig referencing old\ntable as old_table for each row execute procedure trigger_nothing();\nERROR: \"parted_trig\" is a partitioned table\nDETAIL: Triggers on partitioned tables cannot have transition tables.\n\nbut the DETAIL message is confusing, because statement-level triggers\non partitioned tables *can* have transition tables.\n\nWe do not currently allow row-level triggers on partitions to have\ntransition tables either, and the error message for that is “ROW\ntriggers with transition tables are not supported on partitions.”.\nHow about changing the DETAIL message to something similar to this\nlike “ROW triggers with transition tables are not supported on\npartitioned tables.”, to avoid confusion? Patch attached. Will add\nthis to the upcoming CF.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 31 Oct 2022 18:27:53 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Error for row-level triggers with transition tables on partitioned\n tables"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> We do not currently allow row-level triggers on partitions to have\n> transition tables either, and the error message for that is “ROW\n> triggers with transition tables are not supported on partitions.”.\n> How about changing the DETAIL message to something similar to this\n> like “ROW triggers with transition tables are not supported on\n> partitioned tables.”, to avoid confusion? Patch attached. Will add\n> this to the upcoming CF.\n\n+1, this wording is better. I marked it RFC.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Nov 2022 13:20:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error for row-level triggers with transition tables on\n partitioned tables"
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 2:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > We do not currently allow row-level triggers on partitions to have\n> > transition tables either, and the error message for that is “ROW\n> > triggers with transition tables are not supported on partitions.”.\n> > How about changing the DETAIL message to something similar to this\n> > like “ROW triggers with transition tables are not supported on\n> > partitioned tables.”, to avoid confusion? Patch attached. Will add\n> > this to the upcoming CF.\n>\n> +1, this wording is better. I marked it RFC.\n\nCool! I have committed the patch.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 4 Nov 2022 19:45:45 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Error for row-level triggers with transition tables on\n partitioned tables"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nWhen we take LWlock, we already use atomic CAS operation to atomically\nmodify the lock state even in the presence of concurrent lock-takers. But\nif we can not take the lock immediately, we need to put the waiters on a\nwaiting list, and currently, this operation is done not atomically and in a\ncomplicated way:\n- Parts are done under LWLockWaitListLock, which includes a spin delay to\ntake.\n- Also, we need a two-step procedure to immediately dequeue if, after\nadding a current process into a wait queue, it appears that we don’t need\nto (and can not!) sleep as the lock is already free.\n\nIf the lock state contains references to the queue head and tail, we can\nimplement a lockless queue of waiters for the LWLock. Adding new items to\nthe queue head or tail can be done with a single CAS operation (adding to\nthe tail will also require further setting the reference from the previous\ntail). Given that there could be only one lock releaser, which wakes up\nwaiters in the queue, we can handle all the concurrency issues with\nreasonable complexity.\n\nRemoving the queue spinlock bit and the corresponding contention should\ngive the performance gain on high concurrent LWLock usage scenarios.\n\nCurrently, the maximum size of procarray is 2^18 (as stated in\nbuf_internals.h), so with the use of the 64-bit LWLock state variable, we\ncan store the procarray indexes for both head and tail of the queue, all\nthe existing machinery, and even have some spare bits for future flags.\n\nThus we almost entirely avoid spinlocks and replace them with repeated try\nto change the lock state if the CAS operation is unsuccessful due to\nconcurrent state modification.\n\nThe attached patch implements described approach. The patch is based on\nthe developments of Alexander Korotkov in the OrioleDB engine. 
I made\nfurther adoption of those ideas with guidance from him.\n\nWe did some preliminary performance checks and saw that the concurrent\ninsert-only and txid_current tests with ~1000 connections pgbench\nsignificantly increase performance. The scripts for testing and results are\nattached. I’ve done the tests on 32-vcore x86-64 AWS machine using tmpfs as\nstorage for the database to avoid random delays related to disk IO.\n\nThough recently, Andres proposed a different patch, which just evades O(N)\nremoval from the queue of waiters but improves performance even more [1].\nThe results of the comparison of the master branch, lockless queue\n(current) patch, and Andres’ patch are below. Please, take into account\nthat the horizontal axis uses a log scale.\n\n---------------------------------------\ncat insert.sql\n\\set aid random(1, 10 * :scale)\n\\set delta random(1, 100000 * :scale)\nINSERT INTO pgbench_accounts (aid, bid, abalance) VALUES (:aid, :aid,\n:delta);\n\npgbench -d postgres -i -s 1 --unlogged-tables\necho -e\n\"max_connections=2500\\nmax_wal_senders=0\\nwal_level=minimal\\nmax_wal_size =\n10GB\\nshared_buffers = 20000MB\\nautovacuum = off\\nfsync =\noff\\nfull_page_writes = off\\nmax_worker_processes =\n1024\\nmax_parallel_workers = 1024\\n\" > ./pgdata$v/postgresql.auto.conf\npsql -dpostgres -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT\npgbench_accounts_pkey;\n\nfor conns in 1 2 3 4 5 6 7 8 9 10 11 12 13 15 16 18 20 22 24 27 30 33 36 39\n43 47 51 56 62 68 75 82 91 100 110 120 130 150 160 180 200 220 240 270 300\n330 360 390 430 470 510 560 620 680 750 820 910 1000 1100 1200 1300 1500\n1600 1800 2000\n\ndo\npsql -dpostgres -c \"truncate pgbench_accounts;\"\npsql -dpostgres -c \"vacuum full pgbench_accounts;\"\npgbench postgres -f insert.sql -s 1 -P20 -M prepared -T 10 -j 5 -c $conns\n--no-vacuum | grep tps\ndone\n\n\n--------------------------------------------------\n\ncat txid.sql\nselect txid_current();\n\nfor conns in 1 2 3 4 5 6 7 8 9 10 11 
12 13 15 16 18 20 22 24 27 30 33 36 39\n43 47 51 56 62 68 75 82 91 100 110 120 130 150 160 180 200 220 240 270 300\n330 360 390 430 470 510 560 620 680 750 820 910 1000 1100 1200 1300 1500\n1600 1800 2000\n\ndo\npgbench postgres -f txid.sql -s 1 -P20 -M prepared -T 10 -j 5 -c $conns\n--no-vacuum | grep tps\ndone\n\n\n-----------------------------------------------------\nI can not understand why the performance of a lockless queue patch has a\nminor regression in the region of around 20 connections, even when compared\nto the current master branch.\n\n\nAre there some scenarios where the lockless queue approach is superior? I\nexpected they should be, at least in theory. Probably, there is a way to\nimprove the attached patch further to achieve that superiority.\n\n\nBest regards,\nPavel Borisov,\nSupabase.\n\n[1]\nhttps://www.postgresql.org/message-id/20221027165914.2hofzp4cvutj6gin@awork3.anarazel.de",
"msg_date": "Mon, 31 Oct 2022 14:38:23 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Lockless queue of waiters in LWLock"
},
{
"msg_contents": "It seems that test results pictures failed to attach in the original email.\nI add them here.\n\nPavel Borisov",
"msg_date": "Mon, 31 Oct 2022 15:00:47 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi,\n\nThanks for working on this - I think it's something we need to improve.\n\n\nOn 2022-10-31 14:38:23 +0400, Pavel Borisov wrote:\n> If the lock state contains references to the queue head and tail, we can\n> implement a lockless queue of waiters for the LWLock. Adding new items to\n> the queue head or tail can be done with a single CAS operation (adding to\n> the tail will also require further setting the reference from the previous\n> tail). Given that there could be only one lock releaser, which wakes up\n> waiters in the queue, we can handle all the concurrency issues with\n> reasonable complexity.\n\nRight now lock releases happen *after* the lock is released. I suspect that is\nat least part of the reason for the regression you're seeing. It also looks\nlike you're going to need a substantially higher number of atomic operations -\nright now the queue processing needs O(1) atomic ops, but your approach ends\nup with O(waiters) atomic ops.\n\nI suspect it might be worth going halfway, i.e. 
put the list head/tail in the\natomic variable, but process the list with a lock held, after the lock is\nreleased.\n\n\n> Removing the queue spinlock bit and the corresponding contention should\n> give the performance gain on high concurrent LWLock usage scenarios.\n\n> Currently, the maximum size of procarray is 2^18 (as stated in\n> buf_internals.h), so with the use of the 64-bit LWLock state variable, we\n> can store the procarray indexes for both head and tail of the queue, all\n> the existing machinery, and even have some spare bits for future flags.\n\nAnother advantage: It shrinks the LWLock struct, which makes it cheaper to\nmake locks more granular...\n\n\n> I can not understand why the performance of a lockless queue patch has a\n> minor regression in the region of around 20 connections, even when compared\n> to the current master branch.\n\nI suspect that's the point where the additional atomic operations hurt the\nmost, while not yet providing a substantial gain.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:15:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi Andres,\n\nThank you for your feedback.\n\nOn Tue, Nov 1, 2022 at 2:15 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-10-31 14:38:23 +0400, Pavel Borisov wrote:\n> > If the lock state contains references to the queue head and tail, we can\n> > implement a lockless queue of waiters for the LWLock. Adding new items to\n> > the queue head or tail can be done with a single CAS operation (adding to\n> > the tail will also require further setting the reference from the previous\n> > tail). Given that there could be only one lock releaser, which wakes up\n> > waiters in the queue, we can handle all the concurrency issues with\n> > reasonable complexity.\n>\n> Right now lock releases happen *after* the lock is released.\n\nThat makes sense. The patch makes the locker hold the lock, which it's\nprocessing the queue. So, the lock is held for a longer time.\n\n> I suspect that is\n> at least part of the reason for the regression you're seeing. It also looks\n> like you're going to need a substantially higher number of atomic operations -\n> right now the queue processing needs O(1) atomic ops, but your approach ends\n> up with O(waiters) atomic ops.\n\nHmm... In the patch, queue processing calls CAS once after processing\nthe queue. There could be retries to process the queue parts, which\nwere added concurrently. But I doubt it ends up with O(waiters) atomic\nops. Pavel, I think we could gather some statistics to check how many\nretries we have on average.\n\n> I suspect it might be worth going halfway, i.e. put the list head/tail in the\n> atomic variable, but process the list with a lock held, after the lock is\n> released.\n\nGood idea. We, anyway, only allow one locker at a time to process the queue.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 1 Nov 2022 11:39:04 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, Andres,\nThank you very much for the ideas on this topic!\n\n> > > If the lock state contains references to the queue head and tail, we can\n> > > implement a lockless queue of waiters for the LWLock. Adding new items to\n> > > the queue head or tail can be done with a single CAS operation (adding to\n> > > the tail will also require further setting the reference from the previous\n> > > tail). Given that there could be only one lock releaser, which wakes up\n> > > waiters in the queue, we can handle all the concurrency issues with\n> > > reasonable complexity.\n> >\n> > Right now lock releases happen *after* the lock is released.\n>\n> That makes sense. The patch makes the locker hold the lock, which it's\n> processing the queue. So, the lock is held for a longer time.\n>\n> > I suspect that is\n> > at least part of the reason for the regression you're seeing. It also looks\n> > like you're going to need a substantially higher number of atomic operations -\n> > right now the queue processing needs O(1) atomic ops, but your approach ends\n> > up with O(waiters) atomic ops.\n>\n> Hmm... In the patch, queue processing calls CAS once after processing\n> the queue. There could be retries to process the queue parts, which\n> were added concurrently. But I doubt it ends up with O(waiters) atomic\n> ops. Pavel, I think we could gather some statistics to check how many\n> retries we have on average.\n>\n\nI've made some measurements on the number of repeated CAS operations\non lock acquire and release. 
(For this I applied\nPrint-lwlock-stats-on-CAS-repeats.patch onto the previous patch v1 in\nthis thread) The results when running the same insert test, that\nproduced the results in the original post on 20 connections are as\nfollows:\nlwlock ProcArray:\n-------------| locks acquired | CAS repeats to acquire | CAS repeats to release\nshared | 493187 | 57310 | 12049\nexclusive | 46816 | 42329 | 8816\nwait-until-free | - | 0\n | 76124\nblk 79473\n\n> > I suspect it might be worth going halfway, i.e. put the list head/tail in the\n> > atomic variable, but process the list with a lock held, after the lock is\n> > released.\n>\n> Good idea. We, anyway, only allow one locker at a time to process the queue.\n\nAlexander added these changes in v2 of a patch. The results of the\nsame benchmarking master vs Andres' patch vs lockless queue patch v1\nand v2 are attached. They are done in the same way as in the original\npost. The small difference is that I've gone further until 5000\nconnections and also produced both log and linear scale connections\naxis plots for the more clear demonstration.\n\nAround 20 connections TPS increased though not yet to the same value\nthe master and Andres' patches has.\nAnd in a range 300-3000 connections the v2 patch demonstrated clear gain.\n\nI'm planning to gather more detailed statistics on different\nLWLockAcquire calls soon to understand prospects for further\noptimizations.\n\nBest regards,\nPavel Borisov",
"msg_date": "Tue, 1 Nov 2022 22:52:36 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, hackers!\n> I'm planning to gather more detailed statistics on different\n> LWLockAcquire calls soon to understand prospects for further\n> optimizations.\n\nSo, I've made more measurements.\n\n1. Applied measuring patch 0001 to a patch with lockless queue\noptimization (v2) from [0] in this thread and run the same concurrent\ninsert test described in [1] on 20 pgbench connections.\nThe new results for ProcArray lwlock are as follows:\nexacq 45132 // Overall number of exclusive locks taken\nex_attempt[0] 20755 // Exclusive locks taken immediately\nex_attempt[1] 18800 // Exclusive locks taken after one waiting on semaphore\nex_attempt[2] 5577 // Exclusive locks taken after two or more\nwaiting on semaphore\nshacq 494871 // .. same stats for shared locks\nsh_attempt[0] 463211 // ..\nsh_attempt[1] 29767 // ..\nsh_attempt[2] 1893 // .. same stats for shared locks\nsh_wake_calls 31070 // Number of calls to wake up shared waiters\nsh_wakes 36190 // Number of shared waiters woken up.\nGroupClearXid 55300 // Number of calls of ProcArrayGroupClearXid\nEndTransactionInternal: 236193 // Number of calls\nProcArrayEndTransactionInternal\n\n2. Applied measuring patch 0002 to a Andres Freund's patch v3 from [2]\nand run the same concurrent insert test described in [1] on 20 pgbench\nconnections.\nThe results for ProcArray lwlock are as follows:\nexacq 49300 // Overall number of exclusive locks taken\nex_attempt1[0] 18353 // Exclusive locks taken immediately by first\ncall of LWLockAttemptLock in LWLockAcquire loop\nex_attempt2[0] 18144. // Exclusive locks taken immediately by second\ncall of LWLockAttemptLock in LWLockAcquire loop\nex_attempt1[1] 9985 // Exclusive locks taken after one waiting on\nsemaphore by first call of LWLockAttemptLock in LWLockAcquire loop\nex_attempt2[1] 1838. // Exclusive locks taken after one waiting on\nsemaphore by second call of LWLockAttemptLock in LWLockAcquire loop\nex_attempt1[2] 823. 
// Exclusive locks taken after two or more\nwaiting on semaphore by first call of LWLockAttemptLock in\nLWLockAcquire loop\nex_attempt2[2] 157. // Exclusive locks taken after two or more\nwaiting on semaphore by second call of LWLockAttemptLock in\nLWLockAcquire loop\nshacq 508131 // .. same stats for shared locks\nsh_attempt1[0] 469410 //..\nsh_attempt2[0] 27858. //..\nsh_attempt1[1] 10309. //..\nsh_attempt2[1] 460. //..\nsh_attempt1[2] 90. //..\nsh_attempt2[2] 4. // .. same stats for shared locks\ndequeue self 48461 // Number of dequeue_self calls\nsh_wake_calls 27560 // Number of calls to wake up\nshared waiters\nsh_wakes 19408 // Number of shared waiters woken up.\nGroupClearXid 65021. // Number of calls of\nProcArrayGroupClearXid\nEndTransactionInternal: 249003 // Number of calls\nProcArrayEndTransactionInternal\n\nIt seems that two calls in each look in Andres's (and master) code\nhelp evade semaphore-waiting loops that may be relatively expensive.\nThe probable reason for this is that the small delay between these two\ncalls is sometimes enough for concurrent takers to free spinlock for\nthe queue modification. Could we get even more performance if we do\nthree or more tries to take the lock in the queue? Will this degrade\nperformance in some other cases?\n\nOr maybe there is another explanation for now small performance\ndifference around 20 connections described in [0]?\nThoughts?\n\nRegards,\nPavel Borisov\n\n[0] https://www.postgresql.org/message-id/CALT9ZEF7q%2BSarz1MjrX-fM7OsoU7CK16%3DONoGCOkY3Efj%2BGrnw%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/CALT9ZEEz%2B%3DNepc5eti6x531q64Z6%2BDxtP3h-h_8O5HDdtkJcPw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/20221031235114.ftjkife57zil7ryw%40awork3.anarazel.de",
"msg_date": "Thu, 3 Nov 2022 14:50:11 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, Pavel!\n\nOn Thu, Nov 3, 2022 at 1:51 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > I'm planning to gather more detailed statistics on different\n> > LWLockAcquire calls soon to understand prospects for further\n> > optimizations.\n>\n> So, I've made more measurements.\n>\n> 1. Applied measuring patch 0001 to a patch with lockless queue\n> optimization (v2) from [0] in this thread and run the same concurrent\n> insert test described in [1] on 20 pgbench connections.\n> The new results for ProcArray lwlock are as follows:\n> exacq 45132 // Overall number of exclusive locks taken\n> ex_attempt[0] 20755 // Exclusive locks taken immediately\n> ex_attempt[1] 18800 // Exclusive locks taken after one waiting on semaphore\n> ex_attempt[2] 5577 // Exclusive locks taken after two or more\n> waiting on semaphore\n> shacq 494871 // .. same stats for shared locks\n> sh_attempt[0] 463211 // ..\n> sh_attempt[1] 29767 // ..\n> sh_attempt[2] 1893 // .. same stats for shared locks\n> sh_wake_calls 31070 // Number of calls to wake up shared waiters\n> sh_wakes 36190 // Number of shared waiters woken up.\n> GroupClearXid 55300 // Number of calls of ProcArrayGroupClearXid\n> EndTransactionInternal: 236193 // Number of calls\n> ProcArrayEndTransactionInternal\n>\n> 2. Applied measuring patch 0002 to a Andres Freund's patch v3 from [2]\n> and run the same concurrent insert test described in [1] on 20 pgbench\n> connections.\n> The results for ProcArray lwlock are as follows:\n> exacq 49300 // Overall number of exclusive locks taken\n> ex_attempt1[0] 18353 // Exclusive locks taken immediately by first\n> call of LWLockAttemptLock in LWLockAcquire loop\n> ex_attempt2[0] 18144. // Exclusive locks taken immediately by second\n> call of LWLockAttemptLock in LWLockAcquire loop\n> ex_attempt1[1] 9985 // Exclusive locks taken after one waiting on\n> semaphore by first call of LWLockAttemptLock in LWLockAcquire loop\n> ex_attempt2[1] 1838. 
// Exclusive locks taken after one waiting on\n> semaphore by second call of LWLockAttemptLock in LWLockAcquire loop\n> ex_attempt1[2] 823. // Exclusive locks taken after two or more\n> waiting on semaphore by first call of LWLockAttemptLock in\n> LWLockAcquire loop\n> ex_attempt2[2] 157. // Exclusive locks taken after two or more\n> waiting on semaphore by second call of LWLockAttemptLock in\n> LWLockAcquire loop\n> shacq 508131 // .. same stats for shared locks\n> sh_attempt1[0] 469410 //..\n> sh_attempt2[0] 27858. //..\n> sh_attempt1[1] 10309. //..\n> sh_attempt2[1] 460. //..\n> sh_attempt1[2] 90. //..\n> sh_attempt2[2] 4. // .. same stats for shared locks\n> dequeue self 48461 // Number of dequeue_self calls\n> sh_wake_calls 27560 // Number of calls to wake up\n> shared waiters\n> sh_wakes 19408 // Number of shared waiters woken up.\n> GroupClearXid 65021. // Number of calls of\n> ProcArrayGroupClearXid\n> EndTransactionInternal: 249003 // Number of calls\n> ProcArrayEndTransactionInternal\n>\n> It seems that two calls in each look in Andres's (and master) code\n> help evade semaphore-waiting loops that may be relatively expensive.\n> The probable reason for this is that the small delay between these two\n> calls is sometimes enough for concurrent takers to free spinlock for\n> the queue modification. Could we get even more performance if we do\n> three or more tries to take the lock in the queue? Will this degrade\n> performance in some other cases?\n\nThank you for gathering the statistics. Let me do some relative\nanalysis of that.\n\n*Lockless queue patch*\n\n1. Shared lockers\n1.1. 93.60% of them acquire lock without sleeping on semaphore\n1.2. 6.02% of them acquire lock after sleeping on semaphore 1 time\n1.3. 0.38% of them acquire lock after sleeping on semaphore 2 or more times\n2. Exclusive lockers\n2.1. 45.99% of them acquire lock without sleeping on semaphore\n2.2. 41.66% of them acquire lock after sleeping on semaphore 1 time\n2.3. 
12.36% of them acquire lock after sleeping on semaphore 2 or more times\n\nIn general, about 10% of lockers sleep on the semaphore.\n\n*Andres's patch*\n\n1. Shared lockers\n1.1. 97.86% of them acquire lock without sleeping on the semaphore\n(92.38% do this immediately and 5.48% after queuing)\n1.2. 2.12% of them acquire lock after sleeping on semaphore 1 time\n(2.03% do this immediately and 0.09% after queuing)\n1.3. 0.02% of them acquire lock after sleeping on semaphore 2 or more\ntimes (0.02% do this immediately and 0.00% after queuing)\n2. Exclusive lockers\n2.1. 74.03% of them acquire lock without sleeping on the semaphore\n(37.23% do this immediately and 36.80% after queuing)\n2.2. 23.98% of them acquire lock after sleeping on semaphore 1 time\n(20.25% do this immediately and 3.73% after queuing)\n2.3. 1.99% of them acquire lock after sleeping on semaphore 2 or more\ntimes (1.67% do this immediately and 0.32% after queuing)\n\nIn general, about 4% of lockers sleep on the semaphore.\n\nI agree with Pavel that the reason for the regression of the lockless\nqueue patch seems to be sleeping on semaphores. The lockless queue\npatch seems to give only one chance to the lockers to acquire the lock\nwithout such sleeping. But the current LWLock code gives two such\nchances: before queuing and after queuing. LWLockWaitListLock()\nincludes perform_spin_delay(), which may call pg_usleep(). So, the\nsecond attempt to acquire the lock may have significant chances (we\nsee almost the same percentage for exclusive lockers!).\n\nPavel, could you also measure the average time LWLockWaitListLock()\nspends with pg_usleep()?\n\nIt's a bit discouraging that sleeping on semaphores is so slow that\neven manual fixed-time sleeping is faster. I'm not sure if this is the\nissue of semaphores or the multi-process model. If we don't change\nthis, then let's try with multiple lock tries as Pavel proposed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 4 Nov 2022 15:35:44 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-03 14:50:11 +0400, Pavel Borisov wrote:\n> Or maybe there is another explanation for now small performance\n> difference around 20 connections described in [0]?\n> Thoughts?\n\nUsing xadd is quite a bit cheaper than cmpxchg, and now every lock release\nuses a compare-exchange, I think.\n\nIn the past I had a more complicated version of LWLockAcquire which tried to\nuse an xadd to acquire locks. IIRC (and this is long enough ago that I might\nnot) that proved to be a benefit, but I was worried about the complexity. And\njust getting in the version that didn't always use a spinlock was the higher\npriority.\n\nThe use of cmpxchg vs lock inc/lock add/xadd is one of the major reasons why\nlwlocks are slower than a spinlock (but obviously are better under contention\nnonetheless).\n\n\nI have a benchmark program that starts a thread for each physical core and\njust increments a counter on an atomic value.\n\nOn my dual Xeon Gold 5215 workstation:\n\ncmpxchg:\n32: throughput per thread: 0.55M/s, total: 11.02M/s\n64: throughput per thread: 0.63M/s, total: 12.68M/s\n\nlock add:\n32: throughput per thread: 2.10M/s, total: 41.98M/s\n64: throughput per thread: 2.12M/s, total: 42.40M/s\n\nxadd:\n32: throughput per thread: 2.10M/s, total: 41.91M/s\n64: throughput per thread: 2.04M/s, total: 40.71M/s\n\n\nand even when there's no contention, every thread just updating its own\ncacheline:\n\ncmpxchg:\n32: throughput per thread: 88.83M/s, total: 1776.51M/s\n64: throughput per thread: 96.46M/s, total: 1929.11M/s\n\nlock add:\n32: throughput per thread: 166.07M/s, total: 3321.31M/s\n64: throughput per thread: 165.86M/s, total: 3317.22M/s\n\nadd (no lock):\n32: throughput per thread: 530.78M/s, total: 10615.62M/s\n64: throughput per thread: 531.22M/s, total: 10624.35M/s\n\nxadd:\n32: throughput per thread: 165.88M/s, total: 3317.51M/s\n64: throughput per thread: 165.93M/s, total: 3318.53M/s\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 12:07:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, Andres!\n\nOn Fri, Nov 4, 2022 at 10:07 PM Andres Freund <andres@anarazel.de> wrote:\n> The use of cmpxchg vs lock inc/lock add/xadd is one of the major reasons why\n> lwlocks are slower than a spinlock (but obviously are better under contention\n> nonetheless).\n>\n>\n> I have a benchmark program that starts a thread for each physical core and\n> just increments a counter on an atomic value.\n\nThank you for this insight! I didn't know xadd is much cheaper than\ncmpxchg unless there are retries. I also wonder how cmpxchg becomes\nfaster with higher concurrency.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sat, 5 Nov 2022 12:05:43 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-05 12:05:43 +0300, Alexander Korotkov wrote:\n> On Fri, Nov 4, 2022 at 10:07 PM Andres Freund <andres@anarazel.de> wrote:\n> > The use of cmpxchg vs lock inc/lock add/xadd is one of the major reasons why\n> > lwlocks are slower than a spinlock (but obviously are better under contention\n> > nonetheless).\n> >\n> >\n> > I have a benchmark program that starts a thread for each physical core and\n> > just increments a counter on an atomic value.\n> \n> Thank you for this insight! I didn't know xadd is much cheaper than\n> cmpxchg unless there are retries.\n\nThe magnitude of the effect is somewhat surprising, I agree. Some difference\nmakes sense to me, but...\n\n\n> I also wonder how cmpxchg becomes faster with higher concurrency.\n\nIf you're referring to the leading 32/64 that's not concurrency, that's\n32/64bit values. Sorry for not being clearer on that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Nov 2022 12:32:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, Andres, and Alexander!\n\nI've done some more measurements to check the hypotheses regarding the\nperformance of a previous patch v2, and explain the results of tests\nin [1].\n\nThe results below are the same (tps vs connections) plots as in [1],\nand the test is identical to the insert test in this thread [2].\nAdditionally, in each case, there is a plot with results relative to\nAndres Freund's patch [3]. Log plots are good for seeing details in\nthe range of 20-30 connections, but they somewhat hide the fact that\nthe effect in the range of 500+ connections is much more significant\noverall, so I'd recommend looking at the linear plots as well.\n\nThe particular tests:\n\n1. Whether CAS operation cost in comparison to the atomic-sub affects\nperformance.\nThe patch (see attached\natomic-sub-instead-of-CAS-for-shared-lockers-without.patch) uses\natomic-sub to change LWlock state variable for releasing a shared lock\nthat doesn't have a waiting queue.\nThe results are attached (see atomic-sub-inverted-queue-*.jpg)\n\n2. Whether sending wake signals in inverted order affects performance.\nIn patch v2 the most recent shared lockers taken are woken up first\n(but anyway they all get wake signals at the same wake pass).\nThe patch (see attached Invert-wakeup-queue-order.patch) inverts the\nwake queue of lockers so the last lockers come first to wake.\nThe results are attached to the same plot as the previous test (see\natomic-sub-inverted-queue-*.jpg)\n\nIt seems that 1 and 2 don't have a visible effect on performance.\n\n3. In the original master patch lock taking is tried twice separated\nby the cheap spin delay before waiting on a more expensive semaphore.\nThis might explain why the proposed lockless queue patch [1] has a\nlittle performance degradation around 20-30 connections. While we\ndon't need two-step lock-taking anymore I've tried to add it to see\nwhether it can improve performance. 
I've tried different initial\nspinlock delays from 1 microsecond to 100 milliseconds and 1 or 2\nretries (in case the lock is still busy in each case. (see the\nattached: Extra-spin-waits-for-the-lock-to-be-freed-by-concurr.patch)\n\nThe results attached (see add-spin-delays-*.jpg) are interesting.\nIndeed second attempt to take lock after the spin delay will increase\nperformance in any combinations of delays and retries. Also, the delay\nand the number of retries act in opposite directions to the regions of\n20-30 connections and 500+ connections. So we can choose to have a\nmore even performance gain at any number of connections (e.g. 2\nretries x 500 microseconds) or better performance at 500+ connections\nand the same performance as in Andres's patch around 20-30 connections\n(e.g. 1 retry x 100 milliseconds).\n\n4. I've also collected some statistics for the overall (sum for all\nbackends) number and duration of spin-delays in the same test in\nAndres Freund's patch gathered using a slightly modified LWLOCK_STATS\nmechanism.\nconns / overall spin delays, s / overall number of spin delays /\naverage time of spin delay, ms\n20 / 0 / 0 / 0\n100 / 1.9 / 1177 / 1.6\n200 / 21.9 / 14833 / 1.5\n1000 / 12.9 / 11909 / 1.1\n2000 / 3.2 / 2297 / 1.4\n\nI've attached v3 of the lockless queue patch with added 2 retries x\n500 microseconds spin delay which makes the test results superior to\nthe existing patch, leaves no performance degradation, and with steady\nperformance gain in the whole range of connections. 
But it's surely\nworth discussing which parameters we want to have in the production\npatch.\n\nI'm also planning to do the same tests on an ARM server when the free\none comes available to me.\nThoughts?\n\nRegards,\nPavel Borisov.\n\n[1] https://www.postgresql.org/message-id/CALT9ZEF7q%2BSarz1MjrX-fM7OsoU7CK16%3DONoGCOkY3Efj%2BGrnw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CALT9ZEEz%2B%3DNepc5eti6x531q64Z6%2BDxtP3h-h_8O5HDdtkJcPw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/20221031235114.ftjkife57zil7ryw%40awork3.anarazel.de",
"msg_date": "Fri, 11 Nov 2022 15:39:05 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "CFbot isn't happy because of additional patches in the previous\nmessage, so here I attach v3 patch alone.\n\nRegards,\nPavel",
"msg_date": "Fri, 11 Nov 2022 16:39:10 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, Pavel!\n\nOn Fri, Nov 11, 2022 at 2:40 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I've done some more measurements to check the hypotheses regarding the\n> performance of a previous patch v2, and explain the results of tests\n> in [1].\n>\n> The results below are the same (tps vs connections) plots as in [1],\n> and the test is identical to the insert test in this thread [2].\n> Additionally, in each case, there is a plot with results relative to\n> Andres Freund's patch [3]. Log plots are good for seeing details in\n> the range of 20-30 connections, but they somewhat hide the fact that\n> the effect in the range of 500+ connections is much more significant\n> overall, so I'd recommend looking at the linear plots as well.\n\nThank you for doing all the experiments!\n\nBTW, sometimes it's hard to distinguish so many lines on a jpg\npicture. Could I ask you to post the same graphs in png and also post\nraw data in csv format?\n\n> I'm also planning to do the same tests on an ARM server when the free\n> one comes available to me.\n> Thoughts?\n\nARM tests should be great. We definitely need to check this on more\nthan just one architecture. Please, check with and without LSE\ninstructions. They could lead to dramatic speedup [1]. Although,\nmost of precompiled binaries are distributed without them. So, both\ncases seems important to me so far.\n\n From what we have so far, I think we could try combine the multiple\nstrategies to achieve the best result. 2x1ms is one of the leaders\nbefore ~200 connections, and 1x1ms is once of the leaders after. We\ncould implement simple heuristics to switch between 1 and 2 retries\nsimilar to what we have to spin delays. But let's have ARM results\nfirst.\n\n\nLinks\n1. https://akorotkov.github.io/blog/2021/04/30/arm/\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 11 Nov 2022 17:41:59 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, Alexander!\n\n> BTW, sometimes it's hard to distinguish so many lines on a jpg\n> picture. Could I ask you to post the same graphs in png and also post\n> raw data in csv format?\nHere are the same pictures in png format and raw data attached.\n\nRegards,\nPavel.",
"msg_date": "Fri, 11 Nov 2022 19:16:29 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, hackers!\n\nI've noticed that alignment requirements for using\npg_atomic_init_u64() for LWlock state (there's an assertion that it's\naligned at 8 bytes) do not correspond to the code in SimpleLruInit()\non 32-bit arch when MAXIMUM_ALIGNOF = 4.\nFixed this in v4 patch (PFA).\n\nRegards,\nPavel.",
"msg_date": "Thu, 17 Nov 2022 16:07:37 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, hackers!\nAndres Freund recently committed his nice LWLock optimization\na4adc31f6902f6f. So I've rebased the patch on top of the current\nmaster (PFA v5).\n\nRegards,\nPavel Borisov,\nSupabase.",
"msg_date": "Thu, 24 Nov 2022 16:20:21 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "Hi, hackers!\nIn the measurements above in the thread, I've been using LIFO wake\nqueue in a primary lockless patch (and it was attached as the previous\nversions of a patch) and an \"inverted wake queue\" (in faсt FIFO) as\nthe alternative benchmarking option. I think using the latter is more\nfair and natural and provided they show no difference in the speed,\nI'd make the main patch using it (attached as v6). No other changes\nfrom v5, though.\n\nRegards,\nPavel.",
"msg_date": "Fri, 25 Nov 2022 22:52:54 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "On Sat, 26 Nov 2022 at 00:24, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>\n> Hi, hackers!\n> In the measurements above in the thread, I've been using LIFO wake\n> queue in a primary lockless patch (and it was attached as the previous\n> versions of a patch) and an \"inverted wake queue\" (in faсt FIFO) as\n> the alternative benchmarking option. I think using the latter is more\n> fair and natural and provided they show no difference in the speed,\n> I'd make the main patch using it (attached as v6). No other changes\n> from v5, though.\n\nThere has not been much interest on this as the thread has been idle\nfor more than a year now. I'm not sure if we should take it forward or\nnot. I would prefer to close this in the current commitfest unless\nsomeone wants to take it further.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 07:28:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
},
{
"msg_contents": "On Sat, 20 Jan 2024 at 07:28, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, 26 Nov 2022 at 00:24, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> >\n> > Hi, hackers!\n> > In the measurements above in the thread, I've been using LIFO wake\n> > queue in a primary lockless patch (and it was attached as the previous\n> > versions of a patch) and an \"inverted wake queue\" (in faсt FIFO) as\n> > the alternative benchmarking option. I think using the latter is more\n> > fair and natural and provided they show no difference in the speed,\n> > I'd make the main patch using it (attached as v6). No other changes\n> > from v5, though.\n>\n> There has not been much interest on this as the thread has been idle\n> for more than a year now. I'm not sure if we should take it forward or\n> not. I would prefer to close this in the current commitfest unless\n> someone wants to take it further.\n\nI have returned this patch in commitfest as nobody had shown any\ninterest in pursuing it. Feel free to add a new entry when someone\nwants to work on this more actively.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 20:23:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lockless queue of waiters in LWLock"
}
] |
[
{
"msg_contents": "Hi,\n\nIn the commitfest application, I was wondering today what was the exact meaning\nand difference between open/closed status (is it only for the current\ncommitfest?) and between «waiting for author» and «Returned with feedback».\n\nI couldn't find a clear definition searching the wiki, the mailing list (too\nmuch unrelated results) or in the app itself.\n\nMaybe the commitfest home page/menu could link to some documentation hosted in\nthe wiki? Eg. in https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n\nThoughts? Pointers I missed?\n\nRegards,\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:59:54 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Commitfest documentation"
},
{
"msg_contents": "Hi Jehan-Guillaume,\n\n> In the commitfest application, I was wondering today what was the exact meaning\n> and difference between open/closed status (is it only for the current\n> commitfest?)\n\nClosed means that the CF was in the past. It is archived now. Open\nmeans that new patches are accepted to the given CF. If memory serves,\nwhen the CF starts the status changes to \"In Progress\".\n\nThere are five CFs a year: in January, March, July, September, and\nNovember. November one is about to start.\n\n> and between «waiting for author» and «Returned with feedback».\n\nRwF is almost the same as \"Rejected\". It means that some feedback was\nprovided for the patch and the community wouldn't mind accepting a new\npatch when and if this feedback will be accounted for.\n\nWfA means that the patch awaits some (relatively small) actions from\nthe author. Typically it happens after another round of code review.\n\nAttached is a (!) simplified diagram of a typical patch livecycle.\nHopefully it will help a bit.\n\n> I couldn't find a clear definition searching the wiki, the mailing list (too\n> much unrelated results) or in the app itself.\n\nYes, this could be a tribe knowledge to a certain degree at the\nmoment. On the flip side this is also an opportunity to contribute an\narticle to the Wiki.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 31 Oct 2022 16:51:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest documentation"
},
{
"msg_contents": "Hi Aleksander,\n\nThank you for your help!\n \nOn Mon, 31 Oct 2022 16:51:23 +0300\nAleksander Alekseev <aleksander@timescale.com> wrote:\n[...]\n> > In the commitfest application, I was wondering today what was the exact\n> > meaning and difference between open/closed status (is it only for the\n> > current commitfest?) \n> \n> Closed means that the CF was in the past. It is archived now. Open\n> means that new patches are accepted to the given CF. If memory serves,\n> when the CF starts the status changes to \"In Progress\".\n\nSorry, I was asking from a patch point of view, not the whole commitfest. If\nyou look at the \"Change Status\" list on a patch page, there's two sublist\noptions: \"Open statuses\" and \"Closed statuses\". But your answer below answered\nthe question anyway.\n\n> There are five CFs a year: in January, March, July, September, and\n> November. November one is about to start.\n\nThis detail might have a place in the following page:\nhttps://wiki.postgresql.org/wiki/CommitFest\n\nBut I'm not sure it's really worthy?\n\n> > and between «waiting for author» and «Returned with feedback». \n> \n> RwF is almost the same as \"Rejected\". It means that some feedback was\n> provided for the patch and the community wouldn't mind accepting a new\n> patch when and if this feedback will be accounted for.\n> \n> WfA means that the patch awaits some (relatively small) actions from\n> the author. Typically it happens after another round of code review.\n\nThank you for the disambiguation. Here is a proposal for all statuses:\n\n * Needs review: Wait for a new review.\n * WfA : the patch awaits some (relatively small) actions from\n the author, typically after another round of code review.\n * Ready fC : No more comment from reviewer. The code is ready for a\n commiter review.\n * Rejected : The code is rejected. 
The community is not willing to accept\n new patch about $subject.\n * Withdraw : The author decide to remove its patch from the commit fest.\n * Returned wF : Some feedback was provided for the patch and the community\n wouldn't mind accepting a new patch when and if this feedback\n will be accounted for.\n * Move next CF: The patch is still waiting for the author, the reviewers or a\n commiter at the end of the current CF.\n * Committed : The patch as been committed.\n\n> Attached is a (!) simplified diagram of a typical patch livecycle.\n> Hopefully it will help a bit.\n\nIt misses a Withdraw box :)\nI suppose it is linked from the Waiting on Author.\n\n> > I couldn't find a clear definition searching the wiki, the mailing list (too\n> > much unrelated results) or in the app itself. \n> \n> Yes, this could be a tribe knowledge to a certain degree at the\n> moment. On the flip side this is also an opportunity to contribute an\n> article to the Wiki.\n\nI suppose these definitions might go in:\nhttps://wiki.postgresql.org/wiki/Reviewing_a_Patch\n\nHowever, I'm not strictly sure who is responsible to set these statuses. The\nreviewer? The author? The commiter? The CF manager? I bet on the reviewer, but\nit seems weird a random reviewer can reject a patch on its own behalf.\n\nRegards,\n\n\n",
"msg_date": "Mon, 31 Oct 2022 16:18:03 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest documentation"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 8:18 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> However, I'm not strictly sure who is responsible to set these statuses. The\n> reviewer? The author? The commiter? The CF manager? I bet on the reviewer, but\n> it seems weird a random reviewer can reject a patch on its own behalf.\n\nHere's my current understanding (jump in and correct as needed):\n\nNeeds Review: If a patch is Waiting on Author, and then the author\nresponds to the requested feedback and would like additional review,\nthey can bump the patch back to this state. This can also be done by a\nreviewer, or by the CFM, if the author forgets.\n\nWaiting on Author: This is set by a reviewer when they believe a\nresponse is necessary for the process to continue for a patch. Some\npeople set it immediately upon sending a request; others wait a few\ndays to keep the administrative overhead down. A CFM might put a patch\ninto this state if a reviewer forgets and the thread has been hanging\nopen for a while. (They should probably ping the thread at the same\ntime.)\n\nReady for Committer: A reviewer (or a CFM) puts a patch into this\nstate once they think the patchset is ready. Authors typically should\nnot put their own patches into this state unless there is already\ngeneral agreement on the list that it should be there.\n\nRejected: This status doesn't actually happen very often due to its\n\"final\" nature. An individual reviewer should usually not decide this\nunilaterally; propose rejection and wait for general agreement, or\nwait for a CFM or a committer to come along.\n\nReturned with Feedback: A CFM will typically set this at the end of a\nCF. 
An author may preemptively do it as well, to \"pause\" review for\nthe entry while they work on it for a future CF.\n\nMoved to Next CF: A CFM does this at the end of a CF, or an author\ndoes it voluntarily.\n\nWithdrawn: An author does this voluntarily to their own entry.\n\nCommitted: The committer or CFM does this.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 31 Oct 2022 11:02:10 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest documentation"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to split index and table \nstatistics into different types of stats.\n\nThis idea has been proposed by Andres in a couple of threads, see [1] \nand [2].\n\nTo sum up:\n\nWe currently track index and table types of statistics in the same \nformat (so that a number of the \"columns\" in index stats are currently \nunused) and we rename column in views etc to make them somewhat sensible.\n\nSo that the immediate benefits to $SUBJECT are:\n\n- have reasonable names for the fields\n- shrink the current memory usage\n\nThe attached patch proposal:\n\n- renames PGSTAT_KIND_RELATION to PGSTAT_KIND_TABLE\n- creates a new PGSTAT_KIND_INDEX\n- creates new macros: pgstat_count_index_fetch(), \npgstat_count_index_scan(), pgstat_count_index_tuples(), \npgstat_count_index_buffer_read() and pgstat_count_index_buffer_hit() to \nincrement the indexes related stats\n- creates new SQL callable functions dedicated to the indexes that are \nused in system_views.sql\n\nIt also adds basic tests in src/test/regress/sql/stats.sql for toast and \npartitions (we may want to create a dedicated patch for those additional \ntests though).\n\nThe fields renaming has not been done to ease the reading of this patch \n(I think it would be better to create a dedicated patch for the renaming \nonce the split is done).\n\nI'm adding a new CF entry for this patch.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n[1]: \nhttps://www.postgresql.org/message-id/flat/20221019181930.bx73kul4nbiftr65%40awork3.anarazel.de\n\n[2]: \nhttps://www.postgresql.org/message-id/20220818195124.c7ipzf6c5v7vxymc%40awork3.anarazel.de",
"msg_date": "Mon, 31 Oct 2022 14:14:15 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 10/31/22 2:31 PM, Justin Pryzby wrote:\n> I didn't looks closely, but there's a couple places where you wrote\n> \";;\", which looks unintentional.\n> \n> - PG_RETURN_TIMESTAMPTZ(tabentry->lastscan);\n> + PG_RETURN_TIMESTAMPTZ(tabentry->lastscan);;\n> \n\nThanks for looking at it!\noops, thanks for the keen eyes ;-) Fixed in v2 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 31 Oct 2022 17:42:13 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 14:14:15 +0100, Drouvot, Bertrand wrote:\n> Please find attached a patch proposal to split index and table statistics\n> into different types of stats.\n> \n> This idea has been proposed by Andres in a couple of threads, see [1] and\n> [2].\n\nThanks for working on this!\n\n\n\n> diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n> index 5b49cc5a09..8a715db82e 100644\n> --- a/src/backend/catalog/heap.c\n> +++ b/src/backend/catalog/heap.c\n> @@ -1853,7 +1853,7 @@ heap_drop_with_catalog(Oid relid)\n> \t\tRelationDropStorage(rel);\n> \n> \t/* ensure that stats are dropped if transaction commits */\n> -\tpgstat_drop_relation(rel);\n> +\tpgstat_drop_heap(rel);\n\nI don't think \"heap\" is a good name for these, even if there's some historical\nreasons for it. Particularly because you used \"table\" in some bits and pieces\ntoo.\n\n\n> /*\n> @@ -168,39 +210,55 @@ pgstat_unlink_relation(Relation rel)\n> void\n> pgstat_create_relation(Relation rel)\n> {\n> -\tpgstat_create_transactional(PGSTAT_KIND_RELATION,\n> -\t\t\t\t\t\t\t\trel->rd_rel->relisshared ? InvalidOid : MyDatabaseId,\n> -\t\t\t\t\t\t\t\tRelationGetRelid(rel));\n> +\tif (rel->rd_rel->relkind == RELKIND_INDEX)\n> +\t\tpgstat_create_transactional(PGSTAT_KIND_INDEX,\n> +\t\t\t\t\t\t\t\t\trel->rd_rel->relisshared ? InvalidOid : MyDatabaseId,\n> +\t\t\t\t\t\t\t\t\tRelationGetRelid(rel));\n> +\telse\n> +\t\tpgstat_create_transactional(PGSTAT_KIND_TABLE,\n> +\t\t\t\t\t\t\t\t\trel->rd_rel->relisshared ? InvalidOid : MyDatabaseId,\n> +\t\t\t\t\t\t\t\t\tRelationGetRelid(rel));\n> +}\n\nHm - why is this best handled on this level, rather than at the caller?\n\n\n> +/*\n> + * Support function for the SQL-callable pgstat* functions. Returns\n> + * the collected statistics for one index or NULL. 
NULL doesn't mean\n> + * that the index doesn't exist, just that there are no statistics, so the\n> + * caller is better off to report ZERO instead.\n> + */\n> +PgStat_StatIndEntry *\n> +pgstat_fetch_stat_indentry(Oid relid)\n> +{\n> +\tPgStat_StatIndEntry *indentry;\n> +\n> +\tindentry = pgstat_fetch_stat_indentry_ext(false, relid);\n> +\tif (indentry != NULL)\n> +\t\treturn indentry;\n> +\n> +\t/*\n> +\t * If we didn't find it, maybe it's a shared index.\n> +\t */\n> +\tindentry = pgstat_fetch_stat_indentry_ext(true, relid);\n> +\treturn indentry;\n> +}\n> +\n> +/*\n> + * More efficient version of pgstat_fetch_stat_indentry(), allowing to specify\n> + * whether the to-be-accessed index is shared or not.\n> + */\n> +PgStat_StatIndEntry *\n> +pgstat_fetch_stat_indentry_ext(bool shared, Oid reloid)\n> +{\n> +\tOid\t\t\tdboid = (shared ? InvalidOid : MyDatabaseId);\n> +\n> +\treturn (PgStat_StatIndEntry *)\n> +\t\tpgstat_fetch_entry(PGSTAT_KIND_INDEX, dboid, reloid);\n> }\n\nDo we need this split anywhere for now? I suspect not, the table case is\nmainly for the autovacuum launcher, which won't look at indexes \"in isolation\".\n\n\n\n> @@ -240,9 +293,23 @@ pg_stat_get_blocks_fetched(PG_FUNCTION_ARGS)\n> \tPG_RETURN_INT64(result);\n> }\n> \n> +Datum\n> +pg_stat_get_index_blocks_fetched(PG_FUNCTION_ARGS)\n> +{\n> +\tOid\t\t\trelid = PG_GETARG_OID(0);\n> +\tint64\t\tresult;\n> +\tPgStat_StatIndEntry *indentry;\n> +\n> +\tif ((indentry = pgstat_fetch_stat_indentry(relid)) == NULL)\n> +\t\tresult = 0;\n> +\telse\n> +\t\tresult = (int64) (indentry->blocks_fetched);\n> +\n> +\tPG_RETURN_INT64(result);\n> +}\n\nWe have so many copies of this by now - perhaps we first should deduplicate\nthem somehow? Even if it's just a macro or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:30:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/1/22 1:30 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2022-10-31 14:14:15 +0100, Drouvot, Bertrand wrote:\n>> Please find attached a patch proposal to split index and table statistics\n>> into different types of stats.\n>>\n>> This idea has been proposed by Andres in a couple of threads, see [1] and\n>> [2].\n> \n> Thanks for working on this!\n> \n\nThanks for looking at it!\n\n> \n> \n>> diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n>> index 5b49cc5a09..8a715db82e 100644\n>> --- a/src/backend/catalog/heap.c\n>> +++ b/src/backend/catalog/heap.c\n>> @@ -1853,7 +1853,7 @@ heap_drop_with_catalog(Oid relid)\n>> \t\tRelationDropStorage(rel);\n>> \n>> \t/* ensure that stats are dropped if transaction commits */\n>> -\tpgstat_drop_relation(rel);\n>> +\tpgstat_drop_heap(rel);\n> \n> I don't think \"heap\" is a good name for these, even if there's some historical\n> reasons for it. Particularly because you used \"table\" in some bits and pieces\n> too.\n> \n\nAgree, replaced by \"table\" where appropriate in V3 attached.\n\n> \n>> /*\n>> @@ -168,39 +210,55 @@ pgstat_unlink_relation(Relation rel)\n>> void\n>> pgstat_create_relation(Relation rel)\n>> {\n>> -\tpgstat_create_transactional(PGSTAT_KIND_RELATION,\n>> -\t\t\t\t\t\t\t\trel->rd_rel->relisshared ? InvalidOid : MyDatabaseId,\n>> -\t\t\t\t\t\t\t\tRelationGetRelid(rel));\n>> +\tif (rel->rd_rel->relkind == RELKIND_INDEX)\n>> +\t\tpgstat_create_transactional(PGSTAT_KIND_INDEX,\n>> +\t\t\t\t\t\t\t\t\trel->rd_rel->relisshared ? InvalidOid : MyDatabaseId,\n>> +\t\t\t\t\t\t\t\t\tRelationGetRelid(rel));\n>> +\telse\n>> +\t\tpgstat_create_transactional(PGSTAT_KIND_TABLE,\n>> +\t\t\t\t\t\t\t\t\trel->rd_rel->relisshared ? 
InvalidOid : MyDatabaseId,\n>> +\t\t\t\t\t\t\t\t\tRelationGetRelid(rel));\n>> +}\n> \n> Hm - why is this best handled on this level, rather than at the caller?\n> \n> \n\nAgree that it should be split in \npgstat_create_table()/pgstat_create_index() (also as it was already \nsplit for the \"drop\" case): done in V3.\n\n>> +/*\n>> + * Support function for the SQL-callable pgstat* functions. Returns\n>> + * the collected statistics for one index or NULL. NULL doesn't mean\n>> + * that the index doesn't exist, just that there are no statistics, so the\n>> + * caller is better off to report ZERO instead.\n>> + */\n>> +PgStat_StatIndEntry *\n>> +pgstat_fetch_stat_indentry(Oid relid)\n>> +{\n>> +\tPgStat_StatIndEntry *indentry;\n>> +\n>> +\tindentry = pgstat_fetch_stat_indentry_ext(false, relid);\n>> +\tif (indentry != NULL)\n>> +\t\treturn indentry;\n>> +\n>> +\t/*\n>> +\t * If we didn't find it, maybe it's a shared index.\n>> +\t */\n>> +\tindentry = pgstat_fetch_stat_indentry_ext(true, relid);\n>> +\treturn indentry;\n>> +}\n>> +\n>> +/*\n>> + * More efficient version of pgstat_fetch_stat_indentry(), allowing to specify\n>> + * whether the to-be-accessed index is shared or not.\n>> + */\n>> +PgStat_StatIndEntry *\n>> +pgstat_fetch_stat_indentry_ext(bool shared, Oid reloid)\n>> +{\n>> +\tOid\t\t\tdboid = (shared ? InvalidOid : MyDatabaseId);\n>> +\n>> +\treturn (PgStat_StatIndEntry *)\n>> +\t\tpgstat_fetch_entry(PGSTAT_KIND_INDEX, dboid, reloid);\n>> }\n> \n> Do we need this split anywhere for now? 
I suspect not, the table case is\n> mainly for the autovacuum launcher, which won't look at indexes \"in isolation\".\n> \n\nYes I think so as pgstat_fetch_stat_indentry_ext() has its use case in \npgstat_copy_index_stats() (previously pgstat_copy_relation_stats()).\n\n> \n> \n>> @@ -240,9 +293,23 @@ pg_stat_get_blocks_fetched(PG_FUNCTION_ARGS)\n>> \tPG_RETURN_INT64(result);\n>> }\n>> \n>> +Datum\n>> +pg_stat_get_index_blocks_fetched(PG_FUNCTION_ARGS)\n>> +{\n>> +\tOid\t\t\trelid = PG_GETARG_OID(0);\n>> +\tint64\t\tresult;\n>> +\tPgStat_StatIndEntry *indentry;\n>> +\n>> +\tif ((indentry = pgstat_fetch_stat_indentry(relid)) == NULL)\n>> +\t\tresult = 0;\n>> +\telse\n>> +\t\tresult = (int64) (indentry->blocks_fetched);\n>> +\n>> +\tPG_RETURN_INT64(result);\n>> +}\n> \n> We have so many copies of this by now - perhaps we first should deduplicate\n> them somehow? Even if it's just a macro or such.\n> \n\nYeah good point, a new macro has been defined for the \"int64\" return \ncase in V3 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 2 Nov 2022 09:58:36 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi Bertrand,\n\nI'm glad you are working on this.\n\nI had a few thoughts/ideas\n\nIt seems better to have all of the counts in the various stats structs\nnot be prefixed with n_, i_, t_\n\ntypedef struct PgStat_StatDBEntry\n{\n...\n PgStat_Counter n_blocks_fetched;\n PgStat_Counter n_blocks_hit;\n PgStat_Counter n_tuples_returned;\n PgStat_Counter n_tuples_fetched;\n...\n\nI've attached a patch (0002) to change this in case you are interested\nin making such a change (I've attached all of my suggestions in patches\nalong with your original patch so that cfbot still passes).\n\nOn Wed, Nov 2, 2022 at 5:00 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> On 11/1/22 1:30 AM, Andres Freund wrote:\n> > On 2022-10-31 14:14:15 +0100, Drouvot, Bertrand wrote:\n> >> @@ -240,9 +293,23 @@ pg_stat_get_blocks_fetched(PG_FUNCTION_ARGS)\n> >> PG_RETURN_INT64(result);\n> >> }\n> >>\n> >> +Datum\n> >> +pg_stat_get_index_blocks_fetched(PG_FUNCTION_ARGS)\n> >> +{\n> >> + Oid relid = PG_GETARG_OID(0);\n> >> + int64 result;\n> >> + PgStat_StatIndEntry *indentry;\n> >> +\n> >> + if ((indentry = pgstat_fetch_stat_indentry(relid)) == NULL)\n> >> + result = 0;\n> >> + else\n> >> + result = (int64) (indentry->blocks_fetched);\n> >> +\n> >> + PG_RETURN_INT64(result);\n> >> +}\n> >\n> > We have so many copies of this by now - perhaps we first should deduplicate\n> > them somehow? Even if it's just a macro or such.\n> >\n>\n> Yeah good point, a new macro has been defined for the \"int64\" return\n> case in V3 attached.\n\nI looked for other opportunities to de-duplicate, but most of the functions\nthat were added that are identical except the return type and\nPgStat_Kind are short enough that it doesn't make sense to make wrappers\nor macros.\n\nI do think it makes sense to reorder the members of the two structs\nPgStat_IndexCounts and PgStat_TableCounts so that they have a common\nheader. 
I've done that in the attached patch (0003).\n\nIn the flush functions, I was also thinking it might be nice to use the\nsame pattern as is used in [1] and [2] to add the counts together. It\nmakes the lines a bit easier to read, IMO. If we remove the prefixes\nfrom the count fields, this works for many of the fields. I've attached\na patch (0004) that does something like this, in case you wanted to go\nin this direction.\n\nSince you have made new parallel functions for indexes and tables for\nmany of the functions in pgstat_relation.c, perhaps it makes sense to\nsplit it into pgstat_table.c and pgstat_index.c?\n\nOne question I had about the original code (not related to your\nrefactor) is why the pending stats aren't memset in the flush functions\nafter aggregating them into the shared stats.\n\n- Melanie\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/utils/activity/pgstat_checkpointer.c#L49\n[2] https://github.com/postgres/postgres/blob/master/src/backend/utils/activity/pgstat_database.c#L370",
"msg_date": "Fri, 4 Nov 2022 16:51:46 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/4/22 9:51 PM, Melanie Plageman wrote:\n> Hi Bertrand,\n> \n> I'm glad you are working on this.\n> \n> I had a few thoughts/ideas\n> \n\nThanks for looking at it!\n\n> It seems better to have all of the counts in the various stats structs\n> not be prefixed with n_, i_, t_\n> \n> typedef struct PgStat_StatDBEntry\n> {\n> ...\n> PgStat_Counter n_blocks_fetched;\n> PgStat_Counter n_blocks_hit;\n> PgStat_Counter n_tuples_returned;\n> PgStat_Counter n_tuples_fetched;\n> ...\n> \n> I've attached a patch (0002) to change this in case you are interested\n> in making such a change\n\nI did not renamed initially the fields/columns to ease the review.\n\nIndeed, I think we should go further than removing the n_, i_ and t_ \nprefixes so that the fields actually match the view's columns.\n\nFor example, currently pg_stat_all_indexes.idx_tup_read is linked to \n\"tuples_returned\", so that it would make sense to rename \n\"tuples_returned\" to \"tuples_read\" or even \"tup_read\" in the indexes \ncounters.\n\nThat's why I had in mind to do this fields/columns renaming into a \nseparate patch (once this one is committed), so that the current one \nfocus only on splitting the stats: what do you think?\n\n> (I've attached all of my suggestions in patches\n> along with your original patch so that cfbot still passes).\n> \n> On Wed, Nov 2, 2022 at 5:00 AM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>> On 11/1/22 1:30 AM, Andres Freund wrote:\n>>> On 2022-10-31 14:14:15 +0100, Drouvot, Bertrand wrote:\n>>>> @@ -240,9 +293,23 @@ pg_stat_get_blocks_fetched(PG_FUNCTION_ARGS)\n>>>> PG_RETURN_INT64(result);\n>>>> }\n>>>>\n>>>> +Datum\n>>>> +pg_stat_get_index_blocks_fetched(PG_FUNCTION_ARGS)\n>>>> +{\n>>>> + Oid relid = PG_GETARG_OID(0);\n>>>> + int64 result;\n>>>> + PgStat_StatIndEntry *indentry;\n>>>> +\n>>>> + if ((indentry = pgstat_fetch_stat_indentry(relid)) == NULL)\n>>>> + result = 0;\n>>>> + else\n>>>> + result = (int64) 
(indentry->blocks_fetched);\n>>>> +\n>>>> + PG_RETURN_INT64(result);\n>>>> +}\n>>>\n>>> We have so many copies of this by now - perhaps we first should deduplicate\n>>> them somehow? Even if it's just a macro or such.\n>>>\n>>\n>> Yeah good point, a new macro has been defined for the \"int64\" return\n>> case in V3 attached.\n> \n> I looked for other opportunities to de-duplicate, but most of the functions\n> that were added that are identical except the return type and\n> PgStat_Kind are short enough that it doesn't make sense to make wrappers\n> or macros.\n> \n\nYeah, agree.\n\n> I do think it makes sense to reorder the members of the two structs\n> PgStat_IndexCounts and PgStat_TableCounts so that they have a common\n> header. I've done that in the attached patch (0003).\n> \n\nThat's a good idea, thanks! But for that we would need to have the same \nfields names, means:\n\n- Remove the prefixes (as you've done in 0002)\n- And probably reduce the number of fields in the new \nPgStat_RelationCounts that 003 is introducing (for example \ntuples_returned should be excluded if we're going to rename it later on \nto \"tuples_read\" for the indexes to map the \npg_stat_all_indexes.idx_tup_read column).\n\nISTM that we should do it in the \"renaming\" effort, after this patch is \ncommitted.\n\n> In the flush functions, I was also thinking it might be nice to use the\n> same pattern as is used in [1] and [2] to add the counts together. It\n> makes the lines a bit easier to read, IMO. If we remove the prefixes\n> from the count fields, this works for many of the fields. I've attached\n> a patch (0004) that does something like this, in case you wanted to go\n> in this direction.\n\nI like it too but same remarks as previously. 
I think it should be part \nof the \"renaming\" effort.\n\n> \n> Since you have made new parallel functions for indexes and tables for\n> many of the functions in pgstat_relation.c, perhaps it makes sense to\n> split it into pgstat_table.c and pgstat_index.c?\n\nGood point, thanks, I'll work on it.\n\n> \n> One question I had about the original code (not related to your\n> refactor) is why the pending stats aren't memset in the flush functions\n> after aggregating them into the shared stats.\n\nNot sure I'm getting your point; do you think something is not right \nwith the flush functions?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 14 Nov 2022 11:36:52 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/14/22 11:36 AM, Drouvot, Bertrand wrote:\n> On 11/4/22 9:51 PM, Melanie Plageman wrote:\n>> Since you have made new parallel functions for indexes and tables for\n>> many of the functions in pgstat_relation.c, perhaps it makes sense to\n>> split it into pgstat_table.c and pgstat_index.c?\n> \n> Good point, thanks, I'll work on it.\n> \n\nPlease find attached v4 adding pgstat_table.c and pgstat_index.c (and \nremoving pgstat_relation.c).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 15 Nov 2022 10:48:40 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 10:48:40 +0100, Drouvot, Bertrand wrote:\n> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n> index ae3365d917..be7f175bf1 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -36,24 +36,34 @@\n> \n> #define HAS_PGSTAT_PERMISSIONS(role)\t (has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n> \n> +#define PGSTAT_FETCH_STAT_ENTRY(entry, stat_name) ((entry == NULL) ? 0 : (int64) (entry->stat_name))\n> +\n> Datum\n> -pg_stat_get_numscans(PG_FUNCTION_ARGS)\n> +pg_stat_get_index_numscans(PG_FUNCTION_ARGS)\n> {\n> \tOid\t\t\trelid = PG_GETARG_OID(0);\n> \tint64\t\tresult;\n> -\tPgStat_StatTabEntry *tabentry;\n> +\tPgStat_StatIndEntry *indentry = pgstat_fetch_stat_indentry(relid);\n> \n> -\tif ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)\n> -\t\tresult = 0;\n> -\telse\n> -\t\tresult = (int64) (tabentry->numscans);\n> +\tresult = PGSTAT_FETCH_STAT_ENTRY(indentry, numscans);\n> \n> \tPG_RETURN_INT64(result);\n> }\n\nThis still leaves a fair bit of boilerplate. ISTM that the function body\nreally should just be a single line.\n\nMight even be worth defining the whole function via a macro. Perhaps something like\n\nPGSTAT_DEFINE_REL_FIELD_ACCESSOR(PGSTAT_KIND_INDEX, pg_stat_get_index, numscans);\n\nI think the logic to infer which DB oid to use for a stats entry could be\nshared between different kinds of stats. We don't need to duplicate it.\n\nIs there any reason to not replace the \"double lookup\" in\npgstat_fetch_stat_tabentry() with IsSharedRelation()?\n\n\nThis should probably be done in a preparatory commit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 16 Nov 2022 12:12:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/16/22 9:12 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-15 10:48:40 +0100, Drouvot, Bertrand wrote:\n>> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n>> index ae3365d917..be7f175bf1 100644\n>> --- a/src/backend/utils/adt/pgstatfuncs.c\n>> +++ b/src/backend/utils/adt/pgstatfuncs.c\n>> @@ -36,24 +36,34 @@\n>> \n>> #define HAS_PGSTAT_PERMISSIONS(role)\t (has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_STATS) || has_privs_of_role(GetUserId(), role))\n>> \n>> +#define PGSTAT_FETCH_STAT_ENTRY(entry, stat_name) ((entry == NULL) ? 0 : (int64) (entry->stat_name))\n>> +\n>> Datum\n>> -pg_stat_get_numscans(PG_FUNCTION_ARGS)\n>> +pg_stat_get_index_numscans(PG_FUNCTION_ARGS)\n>> {\n>> \tOid\t\t\trelid = PG_GETARG_OID(0);\n>> \tint64\t\tresult;\n>> -\tPgStat_StatTabEntry *tabentry;\n>> +\tPgStat_StatIndEntry *indentry = pgstat_fetch_stat_indentry(relid);\n>> \n>> -\tif ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)\n>> -\t\tresult = 0;\n>> -\telse\n>> -\t\tresult = (int64) (tabentry->numscans);\n>> +\tresult = PGSTAT_FETCH_STAT_ENTRY(indentry, numscans);\n>> \n>> \tPG_RETURN_INT64(result);\n>> }\n> \n> This still leaves a fair bit of boilerplate. ISTM that the function body\n> really should just be a single line.\n> \n> Might even be worth defining the whole function via a macro. Perhaps something like\n> \n> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(PGSTAT_KIND_INDEX, pg_stat_get_index, numscans);\n\nThanks for the feedback!\n\nRight, what about something like the following?\n\n\"\n#define PGSTAT_FETCH_STAT_ENTRY(pgstat_entry_kind, pgstat_fetch_stat_function, relid, stat_name) \\\n\tdo { \\\n\t\tpgstat_entry_kind *entry = pgstat_fetch_stat_function(relid); \\\n\t\tPG_RETURN_INT64(entry == NULL ? 
0 : (int64) (entry->stat_name)); \\\n\t} while (0)\n\nDatum\npg_stat_get_index_numscans(PG_FUNCTION_ARGS)\n{\n\tPGSTAT_FETCH_STAT_ENTRY(PgStat_StatIndEntry, pgstat_fetch_stat_indentry, PG_GETARG_OID(0), numscans);\n}\n\"\n\n> \n> I think the logic to infer which DB oid to use for a stats entry could be\n> shared between different kinds of stats. We don't need to duplicate it.\n> \n\nAgree, will provide a new patch version once [1] is committed.\n\n\n> Is there any reason to not replace the \"double lookup\" in\n> pgstat_fetch_stat_tabentry() with IsSharedRelation()?\n> \n> \n\nThanks for the suggestion!\n\n> This should probably be done in a preparatory commit.\n\nProposal submitted in [1].\n\n[1]: https://www.postgresql.org/message-id/flat/2e4a0ae1-2696-9f0c-301c-2330e447133f%40gmail.com#e47bf5d2902121461b61ed47413628fc\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 18 Nov 2022 12:18:38 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-18 12:18:38 +0100, Drouvot, Bertrand wrote:\n> On 11/16/22 9:12 PM, Andres Freund wrote:\n> > This still leaves a fair bit of boilerplate. ISTM that the function body\n> > really should just be a single line.\n> > \n> > Might even be worth defining the whole function via a macro. Perhaps something like\n> > \n> > PGSTAT_DEFINE_REL_FIELD_ACCESSOR(PGSTAT_KIND_INDEX, pg_stat_get_index, numscans);\n> \n> Thanks for the feedback!\n> \n> Right, what about something like the following?\n> \n> \"\n> #define PGSTAT_FETCH_STAT_ENTRY(pgstat_entry_kind, pgstat_fetch_stat_function, relid, stat_name) \\\n> \tdo { \\\n> \t\tpgstat_entry_kind *entry = pgstat_fetch_stat_function(relid); \\\n> \t\tPG_RETURN_INT64(entry == NULL ? 0 : (int64) (entry->stat_name)); \\\n> \t} while (0)\n> \n> Datum\n> pg_stat_get_index_numscans(PG_FUNCTION_ARGS)\n> {\n> \tPGSTAT_FETCH_STAT_ENTRY(PgStat_StatIndEntry, pgstat_fetch_stat_indentry, PG_GETARG_OID(0), numscans);\n> }\n> \"\n\nThat's better, but still seems like quite a bit of repetition, given the\nnumber of accessors. I think I like my idea of a macro defining the whole\nfunction a bit better.\n\nI'd define a \"base\" macro and then a version that's specific to tables and\nindexes each, so that the pieces related to that don't have to be repeated as\noften.\n\n\n> > This should probably be done in a preparatory commit.\n> \n> Proposal submitted in [1].\n\nNow merged.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Nov 2022 15:19:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/21/22 12:19 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-18 12:18:38 +0100, Drouvot, Bertrand wrote:\n>> On 11/16/22 9:12 PM, Andres Freund wrote:\n>>> This still leaves a fair bit of boilerplate. ISTM that the function body\n>>> really should just be a single line.\n>>>\n>>> Might even be worth defining the whole function via a macro. Perhaps something like\n>>>\n>>> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(PGSTAT_KIND_INDEX, pg_stat_get_index, numscans);\n>>\n>> Thanks for the feedback!\n>>\n>> Right, what about something like the following?\n>>\n>> \"\n>> #define PGSTAT_FETCH_STAT_ENTRY(pgstat_entry_kind, pgstat_fetch_stat_function, relid, stat_name) \\\n>> \tdo { \\\n>> \t\tpgstat_entry_kind *entry = pgstat_fetch_stat_function(relid); \\\n>> \t\tPG_RETURN_INT64(entry == NULL ? 0 : (int64) (entry->stat_name)); \\\n>> \t} while (0)\n>>\n>> Datum\n>> pg_stat_get_index_numscans(PG_FUNCTION_ARGS)\n>> {\n>> \tPGSTAT_FETCH_STAT_ENTRY(PgStat_StatIndEntry, pgstat_fetch_stat_indentry, PG_GETARG_OID(0), numscans);\n>> }\n>> \"\n> \n> That's better, but still seems like quite a bit of repetition, given the\n> number of accessors. 
I think I like my idea of a macro defining the whole\n> function a bit better.\n> \n\nGot it, what about creating another preparatory commit to first introduce something like:\n\n\"\n#define PGSTAT_DEFINE_REL_FIELD_ACCESSOR(function_name_prefix, stat_name) \\\nDatum \\\nfunction_name_prefix##_##stat_name(PG_FUNCTION_ARGS) \\\n{ \\\nOid\t\t\trelid = PG_GETARG_OID(0); \\\nint64\t\tresult; \\\nPgStat_StatTabEntry *tabentry; \\\nif ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL) \\\n\tresult = 0; \\\nelse \\\n\tresult = (int64) (tabentry->stat_name); \\\nPG_RETURN_INT64(result); \\\n} \\\n\nPGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, numscans);\n\nPGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, tuples_returned);\n.\n.\n.\n\"\n\nIf that makes sense to you, I'll submit this preparatory patch.\n\n> Now merged.\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 21 Nov 2022 14:32:31 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 7:03 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> On 11/21/22 12:19 AM, Andres Freund wrote:\n> >\n> > That's better, but still seems like quite a bit of repetition, given the\n> > number of accessors. I think I like my idea of a macro defining the whole\n> > function a bit better.\n> >\n>\n> Got it, what about creating another preparatory commit to first introduce something like:\n>\n> \"\n> #define PGSTAT_DEFINE_REL_FIELD_ACCESSOR(function_name_prefix, stat_name) \\\n> Datum \\\n> function_name_prefix##_##stat_name(PG_FUNCTION_ARGS) \\\n> { \\\n> Oid relid = PG_GETARG_OID(0); \\\n> int64 result; \\\n> PgStat_StatTabEntry *tabentry; \\\n> if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL) \\\n> result = 0; \\\n> else \\\n> result = (int64) (tabentry->stat_name); \\\n> PG_RETURN_INT64(result); \\\n> } \\\n>\n> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, numscans);\n>\n> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, tuples_returned);\n> .\n> .\n> .\n> \"\n>\n> If that makes sense to you, I'll submit this preparatory patch.\n\nI think the macros stitching the function declarations and definitions\nis a great idea to avoid code duplicacy. We seem to be using that\napproach already - PG_FUNCTION_INFO_V1, SH_DECLARE, SH_DEFINE and its\nfriends, STEMMER_MODULE and so on. +1 for first applying this\nprinciple for existing functions. Looking forward to the patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 22 Nov 2022 11:49:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/22/22 7:19 AM, Bharath Rupireddy wrote:\n> On Mon, Nov 21, 2022 at 7:03 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> On 11/21/22 12:19 AM, Andres Freund wrote:\n>>>\n>>> That's better, but still seems like quite a bit of repetition, given the\n>>> number of accessors. I think I like my idea of a macro defining the whole\n>>> function a bit better.\n>>>\n>>\n>> Got it, what about creating another preparatory commit to first introduce something like:\n>>\n>> \"\n>> #define PGSTAT_DEFINE_REL_FIELD_ACCESSOR(function_name_prefix, stat_name) \\\n>> Datum \\\n>> function_name_prefix##_##stat_name(PG_FUNCTION_ARGS) \\\n>> { \\\n>> Oid relid = PG_GETARG_OID(0); \\\n>> int64 result; \\\n>> PgStat_StatTabEntry *tabentry; \\\n>> if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL) \\\n>> result = 0; \\\n>> else \\\n>> result = (int64) (tabentry->stat_name); \\\n>> PG_RETURN_INT64(result); \\\n>> } \\\n>>\n>> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, numscans);\n>>\n>> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, tuples_returned);\n>> .\n>> .\n>> .\n>> \"\n>>\n>> If that makes sense to you, I'll submit this preparatory patch.\n> \n> I think the macros stitching the function declarations and definitions\n> is a great idea to avoid code duplicacy. We seem to be using that\n> approach already - PG_FUNCTION_INFO_V1, SH_DECLARE, SH_DEFINE and its\n> friends, STEMMER_MODULE and so on. +1 for first applying this\n> principle for existing functions. Looking forward to the patch.\n> \n\nThanks! Patch proposal submitted in [1].\n\nI'll resume working on the current thread once [1] is committed.\n\n[1]: https://www.postgresql.org/message-id/d547a9bc-76c2-f875-df74-3ad6fd9d6236%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 22 Nov 2022 08:12:27 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/22/22 8:12 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 11/22/22 7:19 AM, Bharath Rupireddy wrote:\n>> On Mon, Nov 21, 2022 at 7:03 PM Drouvot, Bertrand\n>> <bertranddrouvot.pg@gmail.com> wrote:\n>>>\n>>> On 11/21/22 12:19 AM, Andres Freund wrote:\n>>>>\n>>>> That's better, but still seems like quite a bit of repetition, given the\n>>>> number of accessors. I think I like my idea of a macro defining the whole\n>>>> function a bit better.\n>>>>\n>>>\n>>> Got it, what about creating another preparatory commit to first introduce something like:\n>>>\n>>> \"\n>>> #define PGSTAT_DEFINE_REL_FIELD_ACCESSOR(function_name_prefix, stat_name) \\\n>>> Datum \\\n>>> function_name_prefix##_##stat_name(PG_FUNCTION_ARGS) \\\n>>> { \\\n>>> Oid relid = PG_GETARG_OID(0); \\\n>>> int64 result; \\\n>>> PgStat_StatTabEntry *tabentry; \\\n>>> if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL) \\\n>>> result = 0; \\\n>>> else \\\n>>> result = (int64) (tabentry->stat_name); \\\n>>> PG_RETURN_INT64(result); \\\n>>> } \\\n>>>\n>>> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, numscans);\n>>>\n>>> PGSTAT_DEFINE_REL_FIELD_ACCESSOR(pg_stat_get, tuples_returned);\n>>> .\n>>> .\n>>> .\n>>> \"\n>>>\n>>> If that makes sense to you, I'll submit this preparatory patch.\n>>\n>> I think the macros stitching the function declarations and definitions\n>> is a great idea to avoid code duplicacy. We seem to be using that\n>> approach already - PG_FUNCTION_INFO_V1, SH_DECLARE, SH_DEFINE and its\n>> friends, STEMMER_MODULE and so on. +1 for first applying this\n>> principle for existing functions. Looking forward to the patch.\n>>\n> \n> Thanks! 
Patch proposal submitted in [1].\n> \n> I'll resume working on the current thread once [1] is committed.\n> \n> [1]: https://www.postgresql.org/message-id/d547a9bc-76c2-f875-df74-3ad6fd9d6236%40gmail.com\n> \n\nAs [1] mentioned above has been committed (83a1a1b566), please find attached V5 related to this thread making use of the new macros (namely PG_STAT_GET_RELENTRY_INT64 and PG_STAT_GET_RELENTRY_TIMESTAMPTZ).\n\nI switched from using \"CppConcat\" to using \"##\", as it looks to me it's easier to read now that we are adding another concatenation to the game (due to the table/index split).\n\nThe (Tab,tab) or (Ind,ind) passed as arguments to the macros look \"weird\" (I don't have a better idea yet): purpose is to follow the naming convention for PgStat_StatTabEntry/PgStat_StatIndEntry and pgstat_fetch_stat_tabentry/pgstat_fetch_stat_indentry, thoughts?\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Dec 2022 20:11:08 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\n> Hi,\n> \n> As [1] mentioned above has been committed (83a1a1b566), please find attached V5 related to this thread making use of the new macros (namely PG_STAT_GET_RELENTRY_INT64 and PG_STAT_GET_RELENTRY_TIMESTAMPTZ).\n> \n> I switched from using \"CppConcat\" to using \"##\", as it looks to me it's easier to read now that we are adding another concatenation to the game (due to the table/index split).\n> \n> The (Tab,tab) or (Ind,ind) passed as arguments to the macros look \"weird\" (I don't have a better idea yet): purpose is to follow the naming convention for PgStat_StatTabEntry/PgStat_StatIndEntry and pgstat_fetch_stat_tabentry/pgstat_fetch_stat_indentry, thoughts?\n> \n> Looking forward to your feedback,\n> \n\nAttaching V6 (mandatory rebase due to 8018ffbf58).\n\nWhile at it, I got rid of the weirdness mentioned above by creating 2 sets of macros (one for the tables and one for the indexes).\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 7 Dec 2022 11:11:20 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 12/7/22 11:11 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n>> Hi,\n>>\n>> As [1] mentioned above has been committed (83a1a1b566), please find attached V5 related to this thread making use of the new macros (namely PG_STAT_GET_RELENTRY_INT64 and PG_STAT_GET_RELENTRY_TIMESTAMPTZ).\n>>\n>> I switched from using \"CppConcat\" to using \"##\", as it looks to me it's easier to read now that we are adding another concatenation to the game (due to the table/index split).\n>>\n>> The (Tab,tab) or (Ind,ind) passed as arguments to the macros look \"weird\" (I don't have a better idea yet): purpose is to follow the naming convention for PgStat_StatTabEntry/PgStat_StatIndEntry and pgstat_fetch_stat_tabentry/pgstat_fetch_stat_indentry, thoughts?\n>>\n>> Looking forward to your feedback,\n>>\n> \n> Attaching V6 (mandatory rebase due to 8018ffbf58).\n> \n> While at it, I got rid of the weirdness mentioned above by creating 2 sets of macros (one for the tables and one for the indexes).\n> \n> Looking forward to your feedback,\n> \n> Regards,\n> \n\nAttaching V7, mandatory rebase due to 66dcb09246.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 10 Dec 2022 10:54:44 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 12/10/22 10:54 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 12/7/22 11:11 AM, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>>> Hi,\n>>>\n>>> As [1] mentioned above has been committed (83a1a1b566), please find attached V5 related to this thread making use of the new macros (namely PG_STAT_GET_RELENTRY_INT64 and PG_STAT_GET_RELENTRY_TIMESTAMPTZ).\n>>>\n>>> I switched from using \"CppConcat\" to using \"##\", as it looks to me it's easier to read now that we are adding another concatenation to the game (due to the table/index split).\n>>>\n>>> The (Tab,tab) or (Ind,ind) passed as arguments to the macros look \"weird\" (I don't have a better idea yet): purpose is to follow the naming convention for PgStat_StatTabEntry/PgStat_StatIndEntry and pgstat_fetch_stat_tabentry/pgstat_fetch_stat_indentry, thoughts?\n>>>\n>>> Looking forward to your feedback,\n>>>\n>>\n>> Attaching V6 (mandatory rebase due to 8018ffbf58).\n>>\n>> While at it, I got rid of the weirdness mentioned above by creating 2 sets of macros (one for the tables and one for the indexes).\n>>\n>> Looking forward to your feedback,\n>>\n>> Regards,\n>>\n> \n> Attaching V7, mandatory rebase due to 66dcb09246.\n> \n\nAttaching V8, mandatory rebase due to c8e1ba736b.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 3 Jan 2023 15:19:18 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-03 15:19:18 +0100, Drouvot, Bertrand wrote:\n> diff --git a/src/backend/access/common/relation.c b/src/backend/access/common/relation.c\n> index 4017e175e3..fca166a063 100644\n> --- a/src/backend/access/common/relation.c\n> +++ b/src/backend/access/common/relation.c\n> @@ -73,7 +73,10 @@ relation_open(Oid relationId, LOCKMODE lockmode)\n> \tif (RelationUsesLocalBuffers(r))\n> \t\tMyXactFlags |= XACT_FLAGS_ACCESSEDTEMPNAMESPACE;\n> \n> -\tpgstat_init_relation(r);\n> +\tif (r->rd_rel->relkind == RELKIND_INDEX)\n> +\t\tpgstat_init_index(r);\n> +\telse\n> +\t\tpgstat_init_table(r);\n> \n> \treturn r;\n> }\n> @@ -123,7 +126,10 @@ try_relation_open(Oid relationId, LOCKMODE lockmode)\n> \tif (RelationUsesLocalBuffers(r))\n> \t\tMyXactFlags |= XACT_FLAGS_ACCESSEDTEMPNAMESPACE;\n> \n> -\tpgstat_init_relation(r);\n> +\tif (r->rd_rel->relkind == RELKIND_INDEX)\n> +\t\tpgstat_init_index(r);\n> +\telse\n> +\t\tpgstat_init_table(r);\n> \n> \treturn r;\n> }\n\nNot this patch's fault, but the functions in relation.c have gotten duplicated\nto an almost ridiculous degree :(\n\n\n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index 3fb38a25cf..98bb230b95 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -776,11 +776,19 @@ ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum,\n> \t * Read the buffer, and update pgstat counters to reflect a cache hit or\n> \t * miss.\n> \t */\n> -\tpgstat_count_buffer_read(reln);\n> +\tif (reln->rd_rel->relkind == RELKIND_INDEX)\n> +\t\tpgstat_count_index_buffer_read(reln);\n> +\telse\n> +\t\tpgstat_count_table_buffer_read(reln);\n> \tbuf = ReadBuffer_common(RelationGetSmgr(reln), reln->rd_rel->relpersistence,\n> \t\t\t\t\t\t\tforkNum, blockNum, mode, strategy, &hit);\n> \tif (hit)\n> -\t\tpgstat_count_buffer_hit(reln);\n> +\t{\n> +\t\tif (reln->rd_rel->relkind == RELKIND_INDEX)\n> 
+\t\t\tpgstat_count_index_buffer_hit(reln);\n> +\t\telse\n> +\t\t\tpgstat_count_table_buffer_hit(reln);\n> +\t}\n> \treturn buf;\n> }\n\nNot nice to have additional branches here :(.\n\nI think going forward we should move buffer stats to a \"per-relfilenode\" stats\nentry (which would allow to track writes too), then this branch would be\nremoved again.\n\n\n> +/* -------------------------------------------------------------------------\n> + *\n> + * pgstat_index.c\n> + *\t Implementation of index statistics.\n\nThis is a fair bit of duplicated code. Perhaps it'd be worth keeping\npgstat_relation with code common to table/index stats?\n\n\n> +bool\n> +pgstat_index_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)\n> +{\n> +\tstatic const PgStat_IndexCounts all_zeroes;\n> +\tOid\t\t\tdboid;\n> +\n> +\tPgStat_IndexStatus *lstats; /* pending stats entry */\n> +\tPgStatShared_Index *shrelcomstats;\n\nWhat does \"com\" stand for in shrelcomstats?\n\n\n> +\tPgStat_StatIndEntry *indentry;\t/* index entry of shared stats */\n> +\tPgStat_StatDBEntry *dbentry;\t/* pending database entry */\n> +\n> +\tdboid = entry_ref->shared_entry->key.dboid;\n> +\tlstats = (PgStat_IndexStatus *) entry_ref->pending;\n> +\tshrelcomstats = (PgStatShared_Index *) entry_ref->shared_stats;\n> +\n> +\t/*\n> +\t * Ignore entries that didn't accumulate any actual counts, such as\n> +\t * indexes that were opened by the planner but not used.\n> +\t */\n> +\tif (memcmp(&lstats->i_counts, &all_zeroes,\n> +\t\t\t sizeof(PgStat_IndexCounts)) == 0)\n> +\t{\n> +\t\treturn true;\n> +\t}\n\nI really need to propose pg_memiszero().\n\n\n\n> Datum\n> -pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n> +pg_stat_get_tab_xact_numscans(PG_FUNCTION_ARGS)\n> {\n> \tOid\t\t\trelid = PG_GETARG_OID(0);\n> \tint64\t\tresult;\n> @@ -1360,17 +1413,32 @@ pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n> \tPG_RETURN_INT64(result);\n> }\n> \n> +Datum\n> +pg_stat_get_ind_xact_numscans(PG_FUNCTION_ARGS)\n> +{\n> +\tOid\t\t\trelid = 
PG_GETARG_OID(0);\n> +\tint64\t\tresult;\n> +\tPgStat_IndexStatus *indentry;\n> +\n> +\tif ((indentry = find_indstat_entry(relid)) == NULL)\n> +\t\tresult = 0;\n> +\telse\n> +\t\tresult = (int64) (indentry->i_counts.i_numscans);\n> +\n> +\tPG_RETURN_INT64(result);\n> +}\n\nWhy didn't all these get converted to the same macro based approach as the\n!xact versions?\n\n\n> Datum\n> pg_stat_get_xact_tuples_returned(PG_FUNCTION_ARGS)\n> {\n> \tOid\t\t\trelid = PG_GETARG_OID(0);\n> \tint64\t\tresult;\n> -\tPgStat_TableStatus *tabentry;\n> +\tPgStat_IndexStatus *indentry;\n> \n> -\tif ((tabentry = find_tabstat_entry(relid)) == NULL)\n> +\tif ((indentry = find_indstat_entry(relid)) == NULL)\n> \t\tresult = 0;\n> \telse\n> -\t\tresult = (int64) (tabentry->t_counts.t_tuples_returned);\n> +\t\tresult = (int64) (indentry->i_counts.i_tuples_returned);\n> \n> \tPG_RETURN_INT64(result);\n> }\n\nThere's a bunch of changes like this, and I don't understand -\npg_stat_get_xact_tuples_returned() now looks at index stats, even though it\nafaics continues to be used in pg_stat_xact_all_tables? Huh?\n\n\n> +/* ----------\n> + * PgStat_IndexStatus\t\t\tPer-index status within a backend\n> + *\n> + * Many of the event counters are nontransactional, ie, we count events\n> + * in committed and aborted transactions alike. For these, we just count\n> + * directly in the PgStat_IndexStatus.\n> + * ----------\n> + */\n\nWhich counters are transactional for indexes? 
None, no?\n\n> diff --git a/src/test/recovery/t/029_stats_restart.pl b/src/test/recovery/t/029_stats_restart.pl\n> index 83d6647d32..8b0b597419 100644\n> --- a/src/test/recovery/t/029_stats_restart.pl\n> +++ b/src/test/recovery/t/029_stats_restart.pl\n> @@ -43,8 +43,8 @@ my $sect = \"initial\";\n> is(have_stats('database', $dboid, 0), 't', \"$sect: db stats do exist\");\n> is(have_stats('function', $dboid, $funcoid),\n> \t't', \"$sect: function stats do exist\");\n> -is(have_stats('relation', $dboid, $tableoid),\n> -\t't', \"$sect: relation stats do exist\");\n> +is(have_stats('table', $dboid, $tableoid),\n> +\t't', \"$sect: table stats do exist\");\n\nThink this should grow a test for an index too. There's not that much point in\nthe isolation case, because we don't have transactional stats, but here it\nseems worth testing?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Jan 2023 16:27:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 1/5/23 1:27 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-03 15:19:18 +0100, Drouvot, Bertrand wrote:\n>> diff --git a/src/backend/access/common/relation.c b/src/backend/access/common/relation.c\n>> index 4017e175e3..fca166a063 100644\n>> --- a/src/backend/access/common/relation.c\n>> +++ b/src/backend/access/common/relation.c\n>> @@ -73,7 +73,10 @@ relation_open(Oid relationId, LOCKMODE lockmode)\n>> \tif (RelationUsesLocalBuffers(r))\n>> \t\tMyXactFlags |= XACT_FLAGS_ACCESSEDTEMPNAMESPACE;\n>> \n>> -\tpgstat_init_relation(r);\n>> +\tif (r->rd_rel->relkind == RELKIND_INDEX)\n>> +\t\tpgstat_init_index(r);\n>> +\telse\n>> +\t\tpgstat_init_table(r);\n>> \n>> \treturn r;\n>> }\n>> @@ -123,7 +126,10 @@ try_relation_open(Oid relationId, LOCKMODE lockmode)\n>> \tif (RelationUsesLocalBuffers(r))\n>> \t\tMyXactFlags |= XACT_FLAGS_ACCESSEDTEMPNAMESPACE;\n>> \n>> -\tpgstat_init_relation(r);\n>> +\tif (r->rd_rel->relkind == RELKIND_INDEX)\n>> +\t\tpgstat_init_index(r);\n>> +\telse\n>> +\t\tpgstat_init_table(r);\n>> \n>> \treturn r;\n>> }\n> \n> Not this patch's fault, but the functions in relation.c have gotten duplicated\n> to an almost ridiculous degree :(\n> \n\nThanks for looking at it!\nRight, I'll have a look and will try to submit a dedicated patch for this.\n\n> \n>> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n>> index 3fb38a25cf..98bb230b95 100644\n>> --- a/src/backend/storage/buffer/bufmgr.c\n>> +++ b/src/backend/storage/buffer/bufmgr.c\n>> @@ -776,11 +776,19 @@ ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum,\n>> \t * Read the buffer, and update pgstat counters to reflect a cache hit or\n>> \t * miss.\n>> \t */\n>> -\tpgstat_count_buffer_read(reln);\n>> +\tif (reln->rd_rel->relkind == RELKIND_INDEX)\n>> +\t\tpgstat_count_index_buffer_read(reln);\n>> +\telse\n>> +\t\tpgstat_count_table_buffer_read(reln);\n>> \tbuf = ReadBuffer_common(RelationGetSmgr(reln), 
reln->rd_rel->relpersistence,\n>> \t\t\t\t\t\t\tforkNum, blockNum, mode, strategy, &hit);\n>> \tif (hit)\n>> -\t\tpgstat_count_buffer_hit(reln);\n>> +\t{\n>> +\t\tif (reln->rd_rel->relkind == RELKIND_INDEX)\n>> +\t\t\tpgstat_count_index_buffer_hit(reln);\n>> +\t\telse\n>> +\t\t\tpgstat_count_table_buffer_hit(reln);\n>> +\t}\n>> \treturn buf;\n>> }\n> \n> Not nice to have additional branches here :(.\n\nIndeed, but that does look like the price to pay for the moment ;-(\n\n> \n> I think going forward we should move buffer stats to a \"per-relfilenode\" stats\n> entry (which would allow to track writes too), then this branch would be\n> removed again.\n> \n> \n\nAgree. I think the best approach is to have this patch committed and then resume working on [1] (which would most probably introduce\nthe \"per-relfilenode\" stats). Does this approach make sense to you?\n\n\n>> +/* -------------------------------------------------------------------------\n>> + *\n>> + * pgstat_index.c\n>> + *\t Implementation of index statistics.\n\n> This is a fair bit of duplicated code. 
Perhaps it'd be worth keeping\n> pgstat_relation with code common to table/index stats?\n> \n\nGood point, will look at what can be done.\n\n> \n>> +bool\n>> +pgstat_index_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)\n>> +{\n>> +\tstatic const PgStat_IndexCounts all_zeroes;\n>> +\tOid\t\t\tdboid;\n>> +\n>> +\tPgStat_IndexStatus *lstats; /* pending stats entry */\n>> +\tPgStatShared_Index *shrelcomstats;\n> \n> What does \"com\" stand for in shrelcomstats?\n> \n\nOops, thanks!\n\nThis naming is coming from my first try while working on this subject (that I did not share).\nThe idea I had at that time was to create a PGSTAT_KIND_RELATION_COMMON stat type for common stats between tables and indexes\nand a dedicated one (PGSTAT_KIND_TABLE) for tables (given that indexes would have been fully part of the common one).\nBut it did not work well (specially as we want \"dedicated\" field names), so I preferred to submit the current proposal.\n\nWill fix this bad naming.\n\n> \n>> +\tPgStat_StatIndEntry *indentry;\t/* index entry of shared stats */\n>> +\tPgStat_StatDBEntry *dbentry;\t/* pending database entry */\n>> +\n>> +\tdboid = entry_ref->shared_entry->key.dboid;\n>> +\tlstats = (PgStat_IndexStatus *) entry_ref->pending;\n>> +\tshrelcomstats = (PgStatShared_Index *) entry_ref->shared_stats;\n>> +\n>> +\t/*\n>> +\t * Ignore entries that didn't accumulate any actual counts, such as\n>> +\t * indexes that were opened by the planner but not used.\n>> +\t */\n>> +\tif (memcmp(&lstats->i_counts, &all_zeroes,\n>> +\t\t\t sizeof(PgStat_IndexCounts)) == 0)\n>> +\t{\n>> +\t\treturn true;\n>> +\t}\n> \n> I really need to propose pg_memiszero().\n> \n\nOh yeah, great idea, that would be easier to read.\n\n> \n> \n>> Datum\n>> -pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n>> +pg_stat_get_tab_xact_numscans(PG_FUNCTION_ARGS)\n>> {\n>> \tOid\t\t\trelid = PG_GETARG_OID(0);\n>> \tint64\t\tresult;\n>> @@ -1360,17 +1413,32 @@ pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n>> 
\tPG_RETURN_INT64(result);\n>> }\n>> \n>> +Datum\n>> +pg_stat_get_ind_xact_numscans(PG_FUNCTION_ARGS)\n>> +{\n>> +\tOid\t\t\trelid = PG_GETARG_OID(0);\n>> +\tint64\t\tresult;\n>> +\tPgStat_IndexStatus *indentry;\n>> +\n>> +\tif ((indentry = find_indstat_entry(relid)) == NULL)\n>> +\t\tresult = 0;\n>> +\telse\n>> +\t\tresult = (int64) (indentry->i_counts.i_numscans);\n>> +\n>> +\tPG_RETURN_INT64(result);\n>> +}\n> \n> Why didn't all these get converted to the same macro based approach as the\n> !xact versions?\n> \n\nI think the \"benefits\" was not that \"big\" as compared to the number of non xact ones.\nBut, good point, now with the tables/indexes split I think it does: I'll submit a dedicated patch for it.\n\n> \n>> Datum\n>> pg_stat_get_xact_tuples_returned(PG_FUNCTION_ARGS)\n>> {\n>> \tOid\t\t\trelid = PG_GETARG_OID(0);\n>> \tint64\t\tresult;\n>> -\tPgStat_TableStatus *tabentry;\n>> +\tPgStat_IndexStatus *indentry;\n>> \n>> -\tif ((tabentry = find_tabstat_entry(relid)) == NULL)\n>> +\tif ((indentry = find_indstat_entry(relid)) == NULL)\n>> \t\tresult = 0;\n>> \telse\n>> -\t\tresult = (int64) (tabentry->t_counts.t_tuples_returned);\n>> +\t\tresult = (int64) (indentry->i_counts.i_tuples_returned);\n>> \n>> \tPG_RETURN_INT64(result);\n>> }\n> \n> There's a bunch of changes like this, and I don't understand -\n> pg_stat_get_xact_tuples_returned() now looks at index stats, even though it\n> afaics continues to be used in pg_stat_xact_all_tables? Huh?\n> \n> \n\nLooks like a mistake (I probably messed up while doing all those changes that \"look the same\"), thanks for pointing out!\nI'll go through each one and double check.\n\n>> +/* ----------\n>> + * PgStat_IndexStatus\t\t\tPer-index status within a backend\n>> + *\n>> + * Many of the event counters are nontransactional, ie, we count events\n>> + * in committed and aborted transactions alike. 
For these, we just count\n>> + * directly in the PgStat_IndexStatus.\n>> + * ----------\n>> + */\n> \n> Which counters are transactional for indexes? None, no?\n\nRight, will fix.\n\n> \n>> diff --git a/src/test/recovery/t/029_stats_restart.pl b/src/test/recovery/t/029_stats_restart.pl\n>> index 83d6647d32..8b0b597419 100644\n>> --- a/src/test/recovery/t/029_stats_restart.pl\n>> +++ b/src/test/recovery/t/029_stats_restart.pl\n>> @@ -43,8 +43,8 @@ my $sect = \"initial\";\n>> is(have_stats('database', $dboid, 0), 't', \"$sect: db stats do exist\");\n>> is(have_stats('function', $dboid, $funcoid),\n>> \t't', \"$sect: function stats do exist\");\n>> -is(have_stats('relation', $dboid, $tableoid),\n>> -\t't', \"$sect: relation stats do exist\");\n>> +is(have_stats('table', $dboid, $tableoid),\n>> +\t't', \"$sect: table stats do exist\");\n> \n> Think this should grow a test for an index too. There's not that much point in\n> the isolation case, because we don't have transactional stats, but here it\n> seems worth testing?\n> \n\n+1, will do.\n\n\n[1]: https://www.postgresql.org/message-id/flat/20221220181108.e5fddk3g7cive3v6%40alap3.anarazel.de#4efb4ea3593233bdb400bfb25eb30b81\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 11:03:55 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
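The "compare against a static all-zeroes struct" idiom from `pgstat_index_flush_cb()` quoted above, and the `pg_memiszero()` helper Andres floats, can be sketched outside the PostgreSQL tree as follows. This is an illustrative sketch only: `DemoCounts`, `counts_are_zero_memcmp`, and this `pg_memiszero` are invented names (`pg_memiszero` did not exist as a PostgreSQL API at the time of this thread).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for a pending-counts struct such as PgStat_IndexCounts. */
typedef struct DemoCounts
{
	long		numscans;
	long		tuples_returned;
	long		blocks_fetched;
} DemoCounts;

/*
 * The idiom from the patch: compare the pending entry against a static,
 * zero-initialized instance of the same struct type.
 */
static bool
counts_are_zero_memcmp(const DemoCounts *counts)
{
	static const DemoCounts all_zeroes;

	return memcmp(counts, &all_zeroes, sizeof(DemoCounts)) == 0;
}

/*
 * What a pg_memiszero()-style helper could look like: scan the raw bytes
 * directly, so no per-type static dummy is needed at each call site.
 */
static bool
pg_memiszero(const void *ptr, size_t len)
{
	const unsigned char *p = (const unsigned char *) ptr;

	for (size_t i = 0; i < len; i++)
	{
		if (p[i] != 0)
			return false;
	}
	return true;
}
```

Both variants answer the same question here; note that either one also compares padding bytes, which is harmless for structs that are always zero-initialized before counting starts (as pending stats entries are).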
{
"msg_contents": ">> +/* -------------------------------------------------------------------------\n>> + *\n>> + * pgstat_index.c\n>> + * Implementation of index statistics.\n>\n> This is a fair bit of duplicated code. Perhaps it'd be worth keeping\n> pgstat_relation with code common to table/index stats?\n\n+1 to keep common functions/code between table and index stats. Only\nthe data structure should be different as the goal is to shrink the\ncurrent memory usage.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Jan 5, 2023 at 3:35 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On 1/5/23 1:27 AM, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-01-03 15:19:18 +0100, Drouvot, Bertrand wrote:\n> >> diff --git a/src/backend/access/common/relation.c b/src/backend/access/common/relation.c\n> >> index 4017e175e3..fca166a063 100644\n> >> --- a/src/backend/access/common/relation.c\n> >> +++ b/src/backend/access/common/relation.c\n> >> @@ -73,7 +73,10 @@ relation_open(Oid relationId, LOCKMODE lockmode)\n> >> if (RelationUsesLocalBuffers(r))\n> >> MyXactFlags |= XACT_FLAGS_ACCESSEDTEMPNAMESPACE;\n> >>\n> >> - pgstat_init_relation(r);\n> >> + if (r->rd_rel->relkind == RELKIND_INDEX)\n> >> + pgstat_init_index(r);\n> >> + else\n> >> + pgstat_init_table(r);\n> >>\n> >> return r;\n> >> }\n> >> @@ -123,7 +126,10 @@ try_relation_open(Oid relationId, LOCKMODE lockmode)\n> >> if (RelationUsesLocalBuffers(r))\n> >> MyXactFlags |= XACT_FLAGS_ACCESSEDTEMPNAMESPACE;\n> >>\n> >> - pgstat_init_relation(r);\n> >> + if (r->rd_rel->relkind == RELKIND_INDEX)\n> >> + pgstat_init_index(r);\n> >> + else\n> >> + pgstat_init_table(r);\n> >>\n> >> return r;\n> >> }\n> >\n> > Not this patch's fault, but the functions in relation.c have gotten duplicated\n> > to an almost ridiculous degree :(\n> >\n>\n> Thanks for looking at it!\n> Right, I'll have a look and will try to submit a dedicated patch for this.\n>\n> >\n> >> diff --git a/src/backend/storage/buffer/bufmgr.c 
b/src/backend/storage/buffer/bufmgr.c\n> >> index 3fb38a25cf..98bb230b95 100644\n> >> --- a/src/backend/storage/buffer/bufmgr.c\n> >> +++ b/src/backend/storage/buffer/bufmgr.c\n> >> @@ -776,11 +776,19 @@ ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum,\n> >> * Read the buffer, and update pgstat counters to reflect a cache hit or\n> >> * miss.\n> >> */\n> >> - pgstat_count_buffer_read(reln);\n> >> + if (reln->rd_rel->relkind == RELKIND_INDEX)\n> >> + pgstat_count_index_buffer_read(reln);\n> >> + else\n> >> + pgstat_count_table_buffer_read(reln);\n> >> buf = ReadBuffer_common(RelationGetSmgr(reln), reln->rd_rel->relpersistence,\n> >> forkNum, blockNum, mode, strategy, &hit);\n> >> if (hit)\n> >> - pgstat_count_buffer_hit(reln);\n> >> + {\n> >> + if (reln->rd_rel->relkind == RELKIND_INDEX)\n> >> + pgstat_count_index_buffer_hit(reln);\n> >> + else\n> >> + pgstat_count_table_buffer_hit(reln);\n> >> + }\n> >> return buf;\n> >> }\n> >\n> > Not nice to have additional branches here :(.\n>\n> Indeed, but that does look like the price to pay for the moment ;-(\n>\n> >\n> > I think going forward we should move buffer stats to a \"per-relfilenode\" stats\n> > entry (which would allow to track writes too), then thiw branch would be\n> > removed again.\n> >\n> >\n>\n> Agree. I think the best approach is to have this patch committed and then resume working on [1] (which would most probably introduce\n> the \"per-relfilenode\" stats.) Does this approach make sense to you?\n>\n>\n> >> +/* -------------------------------------------------------------------------\n> >> + *\n> >> + * pgstat_index.c\n> >> + * Implementation of index statistics.\n> >\n> > This is a fair bit of duplicated code. 
Perhaps it'd be worth keeping\n> > pgstat_relation with code common to table/index stats?\n> >\n>\n> Good point, will look at what can be done.\n>\n> >\n> >> +bool\n> >> +pgstat_index_flush_cb(PgStat_EntryRef *entry_ref, bool nowait)\n> >> +{\n> >> + static const PgStat_IndexCounts all_zeroes;\n> >> + Oid dboid;\n> >> +\n> >> + PgStat_IndexStatus *lstats; /* pending stats entry */\n> >> + PgStatShared_Index *shrelcomstats;\n> >\n> > What does \"com\" stand for in shrelcomstats?\n> >\n>\n> Oops, thanks!\n>\n> This naming is coming from my first try while working on this subject (that I did not share).\n> The idea I had at that time was to create a PGSTAT_KIND_RELATION_COMMON stat type for common stats between tables and indexes\n> and a dedicated one (PGSTAT_KIND_TABLE) for tables (given that indexes would have been fully part of the common one).\n> But it did not work well (specially as we want \"dedicated\" field names), so I preferred to submit the current proposal.\n>\n> Will fix this bad naming.\n>\n> >\n> >> + PgStat_StatIndEntry *indentry; /* index entry of shared stats */\n> >> + PgStat_StatDBEntry *dbentry; /* pending database entry */\n> >> +\n> >> + dboid = entry_ref->shared_entry->key.dboid;\n> >> + lstats = (PgStat_IndexStatus *) entry_ref->pending;\n> >> + shrelcomstats = (PgStatShared_Index *) entry_ref->shared_stats;\n> >> +\n> >> + /*\n> >> + * Ignore entries that didn't accumulate any actual counts, such as\n> >> + * indexes that were opened by the planner but not used.\n> >> + */\n> >> + if (memcmp(&lstats->i_counts, &all_zeroes,\n> >> + sizeof(PgStat_IndexCounts)) == 0)\n> >> + {\n> >> + return true;\n> >> + }\n> >\n> > I really need to propose pg_memiszero().\n> >\n>\n> Oh yeah, great idea, that would be easier to read.\n>\n> >\n> >\n> >> Datum\n> >> -pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n> >> +pg_stat_get_tab_xact_numscans(PG_FUNCTION_ARGS)\n> >> {\n> >> Oid relid = PG_GETARG_OID(0);\n> >> int64 result;\n> >> @@ -1360,17 +1413,32 @@ 
pg_stat_get_xact_numscans(PG_FUNCTION_ARGS)\n> >> PG_RETURN_INT64(result);\n> >> }\n> >>\n> >> +Datum\n> >> +pg_stat_get_ind_xact_numscans(PG_FUNCTION_ARGS)\n> >> +{\n> >> + Oid relid = PG_GETARG_OID(0);\n> >> + int64 result;\n> >> + PgStat_IndexStatus *indentry;\n> >> +\n> >> + if ((indentry = find_indstat_entry(relid)) == NULL)\n> >> + result = 0;\n> >> + else\n> >> + result = (int64) (indentry->i_counts.i_numscans);\n> >> +\n> >> + PG_RETURN_INT64(result);\n> >> +}\n> >\n> > Why didn't all these get converted to the same macro based approach as the\n> > !xact versions?\n> >\n>\n> I think the \"benefits\" was not that \"big\" as compared to the number of non xact ones.\n> But, good point, now with the tables/indexes split I think it does: I'll submit a dedicated patch for it.\n>\n> >\n> >> Datum\n> >> pg_stat_get_xact_tuples_returned(PG_FUNCTION_ARGS)\n> >> {\n> >> Oid relid = PG_GETARG_OID(0);\n> >> int64 result;\n> >> - PgStat_TableStatus *tabentry;\n> >> + PgStat_IndexStatus *indentry;\n> >>\n> >> - if ((tabentry = find_tabstat_entry(relid)) == NULL)\n> >> + if ((indentry = find_indstat_entry(relid)) == NULL)\n> >> result = 0;\n> >> else\n> >> - result = (int64) (tabentry->t_counts.t_tuples_returned);\n> >> + result = (int64) (indentry->i_counts.i_tuples_returned);\n> >>\n> >> PG_RETURN_INT64(result);\n> >> }\n> >\n> > There's a bunch of changes like this, and I don't understand -\n> > pg_stat_get_xact_tuples_returned() now looks at index stats, even though it\n> > afaics continues to be used in pg_stat_xact_all_tables? Huh?\n> >\n> >\n>\n> Looks like a mistake (I probably messed up while doing all those changes that \"look the same\"), thanks for pointing out!\n> I'll go through each one and double check.\n>\n> >> +/* ----------\n> >> + * PgStat_IndexStatus Per-index status within a backend\n> >> + *\n> >> + * Many of the event counters are nontransactional, ie, we count events\n> >> + * in committed and aborted transactions alike. 
For these, we just count\n> >> + * directly in the PgStat_IndexStatus.\n> >> + * ----------\n> >> + */\n> >\n> > Which counters are transactional for indexes? None, no?\n>\n> Right, will fix.\n>\n> >\n> >> diff --git a/src/test/recovery/t/029_stats_restart.pl b/src/test/recovery/t/029_stats_restart.pl\n> >> index 83d6647d32..8b0b597419 100644\n> >> --- a/src/test/recovery/t/029_stats_restart.pl\n> >> +++ b/src/test/recovery/t/029_stats_restart.pl\n> >> @@ -43,8 +43,8 @@ my $sect = \"initial\";\n> >> is(have_stats('database', $dboid, 0), 't', \"$sect: db stats do exist\");\n> >> is(have_stats('function', $dboid, $funcoid),\n> >> 't', \"$sect: function stats do exist\");\n> >> -is(have_stats('relation', $dboid, $tableoid),\n> >> - 't', \"$sect: relation stats do exist\");\n> >> +is(have_stats('table', $dboid, $tableoid),\n> >> + 't', \"$sect: table stats do exist\");\n> >\n> > Think this should grow a test for an index too. There's not that much point in\n> > the isolation case, because we don't have transactional stats, but here it\n> > seems worth testing?\n> >\n>\n> +1, will do.\n>\n>\n> [1]: https://www.postgresql.org/message-id/flat/20221220181108.e5fddk3g7cive3v6%40alap3.anarazel.de#4efb4ea3593233bdb400bfb25eb30b81\n>\n> Regards,\n>\n> --\n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n\n\n",
"msg_date": "Mon, 9 Jan 2023 17:08:42 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-09 17:08:42 +0530, Nitin Jadhav wrote:\n> +1 to keep common functions/code between table and index stats. Only\n> the data structure should be different as the goal is to shrink the\n> current memory usage.\n\nI don't think the goal is solely to shrink memory usage - it's also to make it\npossible to add more stats that are specific to just indexes or just\ntables. Of course that's related to memory usage...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 12:04:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 19:49, Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On 12/10/22 10:54 AM, Drouvot, Bertrand wrote:\n> > Hi,\n> >\n> > On 12/7/22 11:11 AM, Drouvot, Bertrand wrote:\n> >> Hi,\n> >>\n> >>> Hi,\n> >>>\n> >>> As [1] mentioned above has been committed (83a1a1b566), please find attached V5 related to this thread making use of the new macros (namely PG_STAT_GET_RELENTRY_INT64 and PG_STAT_GET_RELENTRY_TIMESTAMPTZ).\n> >>>\n> >>> I switched from using \"CppConcat\" to using \"##\", as it looks to me it's easier to read now that we are adding another concatenation to the game (due to the table/index split).\n> >>>\n> >>> The (Tab,tab) or (Ind,ind) passed as arguments to the macros look \"weird\" (I don't have a better idea yet): purpose is to follow the naming convention for PgStat_StatTabEntry/PgStat_StatIndEntry and pgstat_fetch_stat_tabentry/pgstat_fetch_stat_indentry, thoughts?\n> >>>\n> >>> Looking forward to your feedback,\n> >>>\n> >>\n> >> Attaching V6 (mandatory rebase due to 8018ffbf58).\n> >>\n> >> While at it, I got rid of the weirdness mentioned above by creating 2 sets of macros (one for the tables and one for the indexes).\n> >>\n> >> Looking forward to your feedback,\n> >>\n> >> Regards,\n> >>\n> >\n> > Attaching V7, mandatory rebase due to 66dcb09246.\n> >\n>\n> Attaching V8, mandatory rebase due to c8e1ba736b.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nd540a02a724b9643205abce8c5644a0f0908f6e3 ===\n=== applying patch ./v8-0001-split_tables_indexes_stats.patch\n....\npatching file src/backend/utils/activity/pgstat_table.c (renamed from\nsrc/backend/utils/activity/pgstat_relation.c)\nHunk #25 FAILED at 759.\n....\n1 out of 29 hunks FAILED -- saving rejects to file\nsrc/backend/utils/activity/pgstat_table.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3984.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:58:46 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
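The macro-based approach referenced above (switching from `CppConcat` to `##` for the `PG_STAT_GET_RELENTRY_INT64`-style generators) boils down to token pasting a counter name into a family of near-identical getters. Here is a minimal standalone sketch of the technique; `DemoEntry` and `DEMO_GET_INT64` are hypothetical names, not the actual pgstatfuncs.c macros.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a fetched per-relation stats entry (think PgStat_StatTabEntry). */
typedef struct DemoEntry
{
	int64_t		numscans;
	int64_t		tuples_returned;
} DemoEntry;

/*
 * Generate one getter per counter via token pasting ("##"). A NULL entry
 * (no stats recorded yet) reads as zero, mirroring the
 * "if entry == NULL then result = 0" pattern in pgstatfuncs.c.
 */
#define DEMO_GET_INT64(stat) \
static int64_t \
demo_get_##stat(const DemoEntry *entry) \
{ \
	return (entry == NULL) ? 0 : entry->stat; \
}

DEMO_GET_INT64(numscans)
DEMO_GET_INT64(tuples_returned)
```

Each `DEMO_GET_INT64(x)` line expands to a full `demo_get_x()` function definition, which is why splitting tables and indexes makes the macro approach pay off for the xact variants too: the per-counter boilerplate collapses to one line per counter.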
{
"msg_contents": "Hi,\n\nOn 1/19/23 12:28 PM, vignesh C wrote:\n> On Tue, 3 Jan 2023 at 19:49, Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>> Attaching V8, mandatory rebase due to c8e1ba736b.\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> d540a02a724b9643205abce8c5644a0f0908f6e3 ===\n> === applying patch ./v8-0001-split_tables_indexes_stats.patch\n> ....\n> patching file src/backend/utils/activity/pgstat_table.c (renamed from\n> src/backend/utils/activity/pgstat_relation.c)\n> Hunk #25 FAILED at 759.\n> ....\n> 1 out of 29 hunks FAILED -- saving rejects to file\n> src/backend/utils/activity/pgstat_table.c.rej\n> \n> [1] - http://cfbot.cputube.org/patch_41_3984.log\n> \n\nThanks for the warning!\n\nPlease find attached a rebased version (previous comments\nup-thread still need to be addressed though).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 21 Jan 2023 06:42:51 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 06:42:51AM +0100, Drouvot, Bertrand wrote:\n> Please find attached a rebased version (previous comments\n> up-thread still need to be addressed though).\n\nThis patch has a lot of conflicts. Could you send a rebased version?\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 15:54:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 3/16/23 7:54 AM, Michael Paquier wrote:\n> On Sat, Jan 21, 2023 at 06:42:51AM +0100, Drouvot, Bertrand wrote:\n>> Please find attached a rebased version (previous comments\n>> up-thread still need to be addressed though).\n> \n> This patch has a lot of conflicts. Could you send a rebased version?\n\nThanks for looking at it!\n\nPlease find attached v10.\n\nPlease note that the previous comments\nup-thread still need to be addressed though, and that this patch is also somehow linked to the:\n\n\"Generate pg_stat_get_xact*() functions with Macros\" patch (see [1]) (which itself has some dependencies, see [2])\n\nMy plan was to get [1] done before resuming working on the \"Split index and table statistics into different types of stats\" one.\n\nRegards,\n\n[1]: https://commitfest.postgresql.org/42/4106/\n[2]: https://www.postgresql.org/message-id/11744b0e-5f7f-aba8-7d9c-2ff0a0c6e2b2%40gmail.com\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Mar 2023 10:24:32 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 10:24:32AM +0100, Drouvot, Bertrand wrote:\n> My plan was to get [1] done before resuming working on the \"Split\n> index and table statistics into different types of stats\" one.\n\nOkay, I was unsure what should be the order here. Let's see about [1]\nfirst, then.\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 20:59:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "On Thu, 16 Mar 2023 at 05:25, Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> My plan was to get [1] done before resuming working on the \"Split index and table statistics into different types of stats\" one.\n> [1]: https://commitfest.postgresql.org/42/4106/\n\n\nGenerate pg_stat_get_xact*() functions with Macros ([1]) was committed March 27.\n\nThere's only a few days left in this CF. Would you like to leave this\nhere? Should it be marked Needs Review or Ready for Commit? Or should\nwe move it to the next CF now?\n\n\n\n--\nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 3 Apr 2023 17:47:13 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 4/3/23 11:47 PM, Gregory Stark (as CFM) wrote:\n> On Thu, 16 Mar 2023 at 05:25, Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> My plan was to get [1] done before resuming working on the \"Split index and table statistics into different types of stats\" one.\n>> [1]: https://commitfest.postgresql.org/42/4106/\n> \n> \n> Generate pg_stat_get_xact*() functions with Macros ([1]) was committed March 27.\n> \n> There's only a few days left in this CF. Would you like to leave this\n> here? Should it be marked Needs Review or Ready for Commit? Or should\n> we move it to the next CF now?\n> \n> \n> \n\nI moved it to the next commitfest and marked the target version as v17.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Apr 2023 12:04:35 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "On Tue, Apr 04, 2023 at 12:04:35PM +0200, Drouvot, Bertrand wrote:\n> I moved it to the next commitfest and marked the target version as\n> v17.\n\nThanks for moving it. I think that we should be able to do a bit more\nwork for the switch to macros in pgstatfuncs.c, but this is going to\nrequire more review than the feature freeze date allow, I am afraid. \n--\nMichael",
"msg_date": "Tue, 4 Apr 2023 19:49:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "> On 4 Apr 2023, at 12:04, Drouvot, Bertrand <bertranddrouvot.pg@gmail.com> wrote:\n> On 4/3/23 11:47 PM, Gregory Stark (as CFM) wrote:\n>> On Thu, 16 Mar 2023 at 05:25, Drouvot, Bertrand\n>> <bertranddrouvot.pg@gmail.com> wrote:\n>>> \n>>> My plan was to get [1] done before resuming working on the \"Split index and table statistics into different types of stats\" one.\n>>> [1]: https://commitfest.postgresql.org/42/4106/\n>> Generate pg_stat_get_xact*() functions with Macros ([1]) was committed March 27.\n>> There's only a few days left in this CF. Would you like to leave this\n>> here? Should it be marked Needs Review or Ready for Commit? Or should\n>> we move it to the next CF now?\n> \n> I moved it to the next commitfest and marked the target version as v17.\n\nThis patch no longer applies (with tests failing when it did), and the thread\nhas stalled. I'm marking this returned with feedback for now, please feel free\nto resubmit to a future CF with a new version of the patch.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 11:14:21 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 7/10/23 11:14 AM, Daniel Gustafsson wrote:\n>> On 4 Apr 2023, at 12:04, Drouvot, Bertrand <bertranddrouvot.pg@gmail.com> wrote:\n>> On 4/3/23 11:47 PM, Gregory Stark (as CFM) wrote:\n>>> On Thu, 16 Mar 2023 at 05:25, Drouvot, Bertrand\n>>> <bertranddrouvot.pg@gmail.com> wrote:\n>>>>\n>>>> My plan was to get [1] done before resuming working on the \"Split index and table statistics into different types of stats\" one.\n>>>> [1]: https://commitfest.postgresql.org/42/4106/\n>>> Generate pg_stat_get_xact*() functions with Macros ([1]) was committed March 27.\n>>> There's only a few days left in this CF. Would you like to leave this\n>>> here? Should it be marked Needs Review or Ready for Commit? Or should\n>>> we move it to the next CF now?\n>>\n>> I moved it to the next commitfest and marked the target version as v17.\n> \n> This patch no longer applies (with tests failing when it did), and the thread\n> has stalled. I'm marking this returned with feedback for now, please feel free\n> to resubmit to a future CF with a new version of the patch.\n\nThanks for the update.\nI'll resume working on it and re-submit once done.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 4 Aug 2023 11:17:45 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 7/10/23 11:14 AM, Daniel Gustafsson wrote:\n>> On 4 Apr 2023, at 12:04, Drouvot, Bertrand <bertranddrouvot.pg@gmail.com> wrote:\n>> On 4/3/23 11:47 PM, Gregory Stark (as CFM) wrote:\n>>> On Thu, 16 Mar 2023 at 05:25, Drouvot, Bertrand\n>>> <bertranddrouvot.pg@gmail.com> wrote:\n>>>>\n>>>> My plan was to get [1] done before resuming working on the \"Split index and table statistics into different types of stats\" one.\n>>>> [1]: https://commitfest.postgresql.org/42/4106/\n>>> Generate pg_stat_get_xact*() functions with Macros ([1]) was committed March 27.\n>>> There's only a few days left in this CF. Would you like to leave this\n>>> here? Should it be marked Needs Review or Ready for Commit? Or should\n>>> we move it to the next CF now?\n>>\n>> I moved it to the next commitfest and marked the target version as v17.\n> \n> This patch no longer applies (with tests failing when it did), and the thread\n> has stalled. I'm marking this returned with feedback for now, please feel free\n> to resubmit to a future CF with a new version of the patch.\n> \n\nFWIW, attached a rebased version as V11.\n\nWill now work on addressing the up-thread remaining comments.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Nov 2023 09:26:56 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-13 09:26:56 +0100, Drouvot, Bertrand wrote:\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -799,11 +799,19 @@ ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum,\n> \t * Read the buffer, and update pgstat counters to reflect a cache hit or\n> \t * miss.\n> \t */\n> -\tpgstat_count_buffer_read(reln);\n> +\tif (reln->rd_rel->relkind == RELKIND_INDEX)\n> +\t\tpgstat_count_index_buffer_read(reln);\n> +\telse\n> +\t\tpgstat_count_table_buffer_read(reln);\n\nIt's not nice from a layering POV that we need this level of awareness in\nbufmgr.c. I wonder if this is an argument for first splitting out stats like\nblocks_hit, blocks_fetched into something like \"relfilenode stats\" - they're\nagnostic of the relkind. There aren't that many such stats right now,\nadmittedly, but I think we'll want to also track dirtied, written blocks on a\nper relation basis once we can (i.e. we key the relevant stats by relfilenode\ninstead of oid, so we can associate stats when writing out buffers).\n\n\n> +/*\n> + * Initialize a relcache entry to count access statistics. Called whenever an\n> + * index is opened.\n> + *\n> + * We assume that a relcache entry's pgstatind_info field is zeroed by relcache.c\n> + * when the relcache entry is made; thereafter it is long-lived data.\n> + *\n> + * This does not create a reference to a stats entry in shared memory, nor\n> + * allocate memory for the pending stats. 
That happens in\n> + * pgstat_assoc_index().\n> + */\n> +void\n> +pgstat_init_index(Relation rel)\n> +{\n> +\t/*\n> +\t * We only count stats for indexes\n> +\t */\n> +\tAssert(rel->rd_rel->relkind == RELKIND_INDEX);\n> +\n> +\tif (!pgstat_track_counts)\n> +\t{\n> +\t\tif (rel->pgstatind_info != NULL)\n> +\t\t\tpgstat_unlink_index(rel);\n> +\n> +\t\t/* We're not counting at all */\n> +\t\trel->pgstat_enabled = false;\n> +\t\trel->pgstatind_info = NULL;\n> +\t\treturn;\n> +\t}\n> +\n> +\trel->pgstat_enabled = true;\n> +}\n> +\n> +/*\n> + * Prepare for statistics for this index to be collected.\n> + *\n> + * This ensures we have a reference to the stats entry before stats can be\n> + * generated. That is important because an index drop in another\n> + * connection could otherwise lead to the stats entry being dropped, which then\n> + * later would get recreated when flushing stats.\n> + *\n> + * This is separate from pgstat_init_index() as it is not uncommon for\n> + * relcache entries to be opened without ever getting stats reported.\n> + */\n> +void\n> +pgstat_assoc_index(Relation rel)\n> +{\n> +\tAssert(rel->pgstat_enabled);\n> +\tAssert(rel->pgstatind_info == NULL);\n> +\n> +\t/* Else find or make the PgStat_IndexStatus entry, and update link */\n> +\trel->pgstatind_info = pgstat_prep_index_pending(RelationGetRelid(rel),\n> +\t\t\t\t\t\t\t\t\t\t\t\t\trel->rd_rel->relisshared);\n> +\n> +\t/* don't allow link a stats to multiple relcache entries */\n> +\tAssert(rel->pgstatind_info->relation == NULL);\n> +\n> +\t/* mark this relation as the owner */\n> +\trel->pgstatind_info->relation = rel;\n> +}\n> +\n> +/*\n> + * Break the mutual link between a relcache entry and pending index stats entry.\n> + * This must be called whenever one end of the link is removed.\n> + */\n> +void\n> +pgstat_unlink_index(Relation rel)\n> +{\n> +\n> +\tif (rel->pgstatind_info == NULL)\n> +\t\treturn;\n> +\n> +\t/* link sanity check for the index stats */\n> +\tif (rel->pgstatind_info)\n> 
+\t{\n> +\t\tAssert(rel->pgstatind_info->relation == rel);\n> +\t\trel->pgstatind_info->relation = NULL;\n> +\t\trel->pgstatind_info = NULL;\n> +\t}\n> +}\n> ...\n\nThis is a fair bit of duplicated code - perhaps we could have shared helpers?\n\n\n> +/* ----------\n> + * PgStat_IndexStatus\t\t\tPer-index status within a backend\n> + *\n> + * Many of the event counters are nontransactional, ie, we count events\n> + * in committed and aborted transactions alike. For these, we just count\n> + * directly in the PgStat_IndexStatus.\n> + * ----------\n> + */\n> +typedef struct PgStat_IndexStatus\n> +{\n> +\tOid\t\t\tr_id;\t\t\t/* relation's OID */\n> +\tbool\t\tr_shared;\t\t/* is it a shared catalog? */\n> +\tstruct PgStat_IndexXactStatus *trans;\t/* lowest subxact's counts */\n> +\tPgStat_IndexCounts counts;\t/* event counts to be sent */\n> +\tRelation\trelation;\t\t/* rel that is using this entry */\n> +} PgStat_IndexStatus;\n> +\n> /* ----------\n> * PgStat_TableXactStatus\t\tPer-table, per-subtransaction status\n> * ----------\n> @@ -227,6 +264,29 @@ typedef struct PgStat_TableXactStatus\n> } PgStat_TableXactStatus;\n> \n> \n> +/* ----------\n> + * PgStat_IndexXactStatus\t\tPer-index, per-subtransaction status\n> + * ----------\n> + */\n> +typedef struct PgStat_IndexXactStatus\n> +{\n> +\tPgStat_Counter tuples_inserted; /* tuples inserted in (sub)xact */\n> +\tPgStat_Counter tuples_updated;\t/* tuples updated in (sub)xact */\n> +\tPgStat_Counter tuples_deleted;\t/* tuples deleted in (sub)xact */\n> +\tbool\t\ttruncdropped;\t/* relation truncated/dropped in this\n> +\t\t\t\t\t\t\t\t * (sub)xact */\n> +\t/* tuples i/u/d prior to truncate/drop */\n> +\tPgStat_Counter inserted_pre_truncdrop;\n> +\tPgStat_Counter updated_pre_truncdrop;\n> +\tPgStat_Counter deleted_pre_truncdrop;\n> +\tint\t\t\tnest_level;\t\t/* subtransaction nest level */\n> +\t/* links to other structs for same relation: */\n> +\tstruct PgStat_IndexXactStatus *upper;\t/* next higher subxact if any 
*/\n> +\tPgStat_IndexStatus *parent; /* per-table status */\n> +\t/* structs of same subxact level are linked here: */\n> +\tstruct PgStat_IndexXactStatus *next;\t/* next of same subxact */\n> +} PgStat_IndexXactStatus;\n\nI don't think much of this is used? It doesn't look like you're using most of\nthe fields. Which makes sense - there's not really the same transactional\nbehaviour for indexes as there is for tables.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 12:44:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn 11/13/23 9:44 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-13 09:26:56 +0100, Drouvot, Bertrand wrote:\n>> --- a/src/backend/storage/buffer/bufmgr.c\n>> +++ b/src/backend/storage/buffer/bufmgr.c\n>> @@ -799,11 +799,19 @@ ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum,\n>> \t * Read the buffer, and update pgstat counters to reflect a cache hit or\n>> \t * miss.\n>> \t */\n>> -\tpgstat_count_buffer_read(reln);\n>> +\tif (reln->rd_rel->relkind == RELKIND_INDEX)\n>> +\t\tpgstat_count_index_buffer_read(reln);\n>> +\telse\n>> +\t\tpgstat_count_table_buffer_read(reln);\n> \n> It's not nice from a layering POV that we need this level of awareness in\n> bufmgr.c. I wonder if this is an argument for first splitting out stats like\n> blocks_hit, blocks_fetched into something like \"relfilenode stats\" - they're\n> agnostic of the relkind. \n\nThanks for looking at it! Yeah I think that would make a lot of sense\nto track some stats per relfilenode.\n\n> There aren't that many such stats right now,\n> admittedly, but I think we'll want to also track dirtied, written blocks on a\n> per relation basis once we can (i.e. we key the relevant stats by relfilenode\n> instead of oid, so we can associate stats when writing out buffers).\n> \n> \n\nAgree. Then, I think that would make sense to start this effort before the\nsplit index/table one. I can work on a per relfilenode stat patch first.\n\nDoes this patch ordering make sense to you?\n\n1) Introduce per relfilenode stats\n2) Split index and table stats\n\n>> +/*\n>> + * Initialize a relcache entry to count access statistics. Called whenever an\n>> + * index is opened.\n>> + *\n>> + * We assume that a relcache entry's pgstatind_info field is zeroed by relcache.c\n>> + * when the relcache entry is made; thereafter it is long-lived data.\n>> + *\n>> + * This does not create a reference to a stats entry in shared memory, nor\n>> + * allocate memory for the pending stats. 
That happens in\n>> + * pgstat_assoc_index().\n>> + */\n>> +void\n>> +pgstat_init_index(Relation rel)\n>> +{\n>> +\t/*\n>> +\t * We only count stats for indexes\n>> +\t */\n>> +\tAssert(rel->rd_rel->relkind == RELKIND_INDEX);\n>> +\n>> +\tif (!pgstat_track_counts)\n>> +\t{\n>> +\t\tif (rel->pgstatind_info != NULL)\n>> +\t\t\tpgstat_unlink_index(rel);\n>> +\n>> +\t\t/* We're not counting at all */\n>> +\t\trel->pgstat_enabled = false;\n>> +\t\trel->pgstatind_info = NULL;\n>> +\t\treturn;\n>> +\t}\n>> +\n>> +\trel->pgstat_enabled = true;\n>> +}\n>> +\n>> +/*\n>> + * Prepare for statistics for this index to be collected.\n>> + *\n>> + * This ensures we have a reference to the stats entry before stats can be\n>> + * generated. That is important because an index drop in another\n>> + * connection could otherwise lead to the stats entry being dropped, which then\n>> + * later would get recreated when flushing stats.\n>> + *\n>> + * This is separate from pgstat_init_index() as it is not uncommon for\n>> + * relcache entries to be opened without ever getting stats reported.\n>> + */\n>> +void\n>> +pgstat_assoc_index(Relation rel)\n>> +{\n>> +\tAssert(rel->pgstat_enabled);\n>> +\tAssert(rel->pgstatind_info == NULL);\n>> +\n>> +\t/* Else find or make the PgStat_IndexStatus entry, and update link */\n>> +\trel->pgstatind_info = pgstat_prep_index_pending(RelationGetRelid(rel),\n>> +\t\t\t\t\t\t\t\t\t\t\t\t\trel->rd_rel->relisshared);\n>> +\n>> +\t/* don't allow link a stats to multiple relcache entries */\n>> +\tAssert(rel->pgstatind_info->relation == NULL);\n>> +\n>> +\t/* mark this relation as the owner */\n>> +\trel->pgstatind_info->relation = rel;\n>> +}\n>> +\n>> +/*\n>> + * Break the mutual link between a relcache entry and pending index stats entry.\n>> + * This must be called whenever one end of the link is removed.\n>> + */\n>> +void\n>> +pgstat_unlink_index(Relation rel)\n>> +{\n>> +\n>> +\tif (rel->pgstatind_info == NULL)\n>> +\t\treturn;\n>> +\n>> +\t/* link sanity 
check for the index stats */\n>> +\tif (rel->pgstatind_info)\n>> +\t{\n>> +\t\tAssert(rel->pgstatind_info->relation == rel);\n>> +\t\trel->pgstatind_info->relation = NULL;\n>> +\t\trel->pgstatind_info = NULL;\n>> +\t}\n>> +}\n>> ...\n> \n> This is a fair bit of duplicated code - perhaps we could have shared helpers?\n> \n\nYeah, I had it in mind and that was part of the \"Will now work on addressing the\nup-thread remaining comments\" remark I made up-thread.\n\n> \n>> +/* ----------\n>> + * PgStat_IndexStatus\t\t\tPer-index status within a backend\n>> + *\n>> + * Many of the event counters are nontransactional, ie, we count events\n>> + * in committed and aborted transactions alike. For these, we just count\n>> + * directly in the PgStat_IndexStatus.\n>> + * ----------\n>> + */\n>> +typedef struct PgStat_IndexStatus\n>> +{\n>> +\tOid\t\t\tr_id;\t\t\t/* relation's OID */\n>> +\tbool\t\tr_shared;\t\t/* is it a shared catalog? */\n>> +\tstruct PgStat_IndexXactStatus *trans;\t/* lowest subxact's counts */\n>> +\tPgStat_IndexCounts counts;\t/* event counts to be sent */\n>> +\tRelation\trelation;\t\t/* rel that is using this entry */\n>> +} PgStat_IndexStatus;\n>> +\n>> /* ----------\n>> * PgStat_TableXactStatus\t\tPer-table, per-subtransaction status\n>> * ----------\n>> @@ -227,6 +264,29 @@ typedef struct PgStat_TableXactStatus\n>> } PgStat_TableXactStatus;\n>> \n>> \n>> +/* ----------\n>> + * PgStat_IndexXactStatus\t\tPer-index, per-subtransaction status\n>> + * ----------\n>> + */\n>> +typedef struct PgStat_IndexXactStatus\n>> +{\n>> +\tPgStat_Counter tuples_inserted; /* tuples inserted in (sub)xact */\n>> +\tPgStat_Counter tuples_updated;\t/* tuples updated in (sub)xact */\n>> +\tPgStat_Counter tuples_deleted;\t/* tuples deleted in (sub)xact */\n>> +\tbool\t\ttruncdropped;\t/* relation truncated/dropped in this\n>> +\t\t\t\t\t\t\t\t * (sub)xact */\n>> +\t/* tuples i/u/d prior to truncate/drop */\n>> +\tPgStat_Counter inserted_pre_truncdrop;\n>> +\tPgStat_Counter 
updated_pre_truncdrop;\n>> +\tPgStat_Counter deleted_pre_truncdrop;\n>> +\tint\t\t\tnest_level;\t\t/* subtransaction nest level */\n>> +\t/* links to other structs for same relation: */\n>> +\tstruct PgStat_IndexXactStatus *upper;\t/* next higher subxact if any */\n>> +\tPgStat_IndexStatus *parent; /* per-table status */\n>> +\t/* structs of same subxact level are linked here: */\n>> +\tstruct PgStat_IndexXactStatus *next;\t/* next of same subxact */\n>> +} PgStat_IndexXactStatus;\n> \n> I don't think much of this is used? It doesn't look like you're using most of\n> the fields. Which makes sense - there's not really the same transactional\n> behaviour for indexes as there is for tables.\n> \n> \n\nFully agree. I had in mind to revisit this stuff too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Nov 2023 09:04:03 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Split index and table statistics into different types of stats"
},
{
"msg_contents": "Hi,\n\nOn Tue, Nov 14, 2023 at 09:04:03AM +0100, Drouvot, Bertrand wrote:\n> On 11/13/23 9:44 PM, Andres Freund wrote:\n> > Hi,\n> > \n> > It's not nice from a layering POV that we need this level of awareness in\n> > bufmgr.c. I wonder if this is an argument for first splitting out stats like\n> > blocks_hit, blocks_fetched into something like \"relfilenode stats\" - they're\n> > agnostic of the relkind.\n> \n> Thanks for looking at it! Yeah I think that would make a lot of sense\n> to track some stats per relfilenode.\n> \n> > There aren't that many such stats right now,\n> > admittedly, but I think we'll want to also track dirtied, written blocks on a\n> > per relation basis once we can (i.e. we key the relevant stats by relfilenode\n> > instead of oid, so we can associate stats when writing out buffers).\n> > \n> > \n> \n> Agree. Then, I think that would make sense to start this effort before the\n> split index/table one. I can work on a per relfilenode stat patch first.\n> \n> Does this patch ordering make sense to you?\n> \n> 1) Introduce per relfilenode stats\n> 2) Split index and table stats\n\nJust a quick update on this: I had a chat with Andres at pgconf.eu and we agreed\non the above ordering so that:\n\n1) I started working on relfilenode stats (I hope to be able to provide a POC\npatch soon).\n\n2) The CF entry [1] status related to this thread has been changed to \"Waiting\non Author\".\n\n[1]: https://commitfest.postgresql.org/47/4792/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 25 Jan 2024 08:36:17 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Split index and table statistics into different types of stats"
}
] |
[
{
"msg_contents": "Team,\n\nWhile on the road in Iowa visiting covered bridges I met up with an amazing\nindividual named Brent. Brent, is with a small organization named: Darpa.\n\nThey are using PostgreSQL + RLS + XPATH but unfortunately the performance\nhas been less than what those who use PostgreSQL on a daily basis would\nexpect. Specifically when using RLS + XPATH, the optimizer does not\nprefilter and will scan every row. This has taken their previous queries on\nanother database from sub second, to 25+ seconds on PostgreSQL.\n\nUnfortunately, we were at a covered bridge park (see attached) and weren't\nable to get deeper into the issue. That said, I did tell him that I would\nreport the issue. Maybe there is traction, maybe there isn't to get this\nissue fixed.\n\nMay the code be with you,\n\nJD\n-- \n\n - Founder - https://commandprompt.com/ - 24x7x365 Postgres since 1997\n - Founder and Co-Chair - https://postgresconf.org/\n - Founder - https://postgresql.us - United States PostgreSQL\n - Public speaker, published author, postgresql expert, and people\n believer.\n - Host - More than a refresh\n <https://commandprompt.com/about/more-than-a-refresh/>: A podcast about\n data and the people who wrangle it.",
"msg_date": "Mon, 31 Oct 2022 10:37:30 -0700",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": true,
"msg_subject": "RLS + XPATH"
},
{
"msg_contents": "On 10/31/22 13:37, Joshua Drake wrote:\n> Team,\n> \n> While on the road in Iowa visiting covered bridges I met up with an \n> amazing individual named Brent. Brent, is with a small organization \n> named: Darpa.\n> \n> They are using PostgreSQL + RLS + XPATH but unfortunately the \n> performance has been less than what those who use PostgreSQL on a daily \n> basis would expect. Specifically when using RLS + XPATH, the optimizer \n> does not prefilter and will scan every row. This has taken their \n> previous queries on another database from sub second, to 25+ seconds on \n> PostgreSQL.\n> \n> Unfortunately, we were at a covered bridge park (see attached) and \n> weren't able to get deeper into the issue. That said, I did tell him \n> that I would report the issue. Maybe there is traction, maybe there \n> isn't to get this issue fixed.\n\n\nThe related functions need to be marked leakproof to get decent \nperformance from RLS.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:55:03 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: RLS + XPATH"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a patchset to refactor heapgettup(), heapgettup_pagemode(),\nand heapgetpage(). heapgettup() and heapgettup_pagemode() have a lot of\nduplicated code, confusingly nested if statements, and unnecessary local\nvariables. While working on a feature for the AIO/DIO patchset, I\nnoticed that it was difficult to add new code to heapgettup() and\nheapgettup_pagemode() because of how the functions are written.\n\nI've taken a stab at refactoring them -- without generating less\nefficient code or causing regressions. I'm interested if people find it\nmore readable and if those with more assembly expertise see issues (new\nbranches added which are not highly predictable, etc). I took a look at\nthe assembly for those symbols compiled at O2 but am not experienced\nenough at analysis to come to any conclusions.\n\n- Melanie",
"msg_date": "Mon, 31 Oct 2022 14:37:39 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "heapgettup refactoring"
},
{
"msg_contents": "FYI:\n\n[18:51:54.707] ../src/backend/access/heap/heapam.c(720): warning C4098: 'heapgettup': 'void' function returning a value\n[18:51:54.707] ../src/backend/access/heap/heapam.c(850): warning C4098: 'heapgettup_pagemode': 'void' function returning a value\n\nFor some reason, MSVC is the only one to complain, and cfbot doesn't\ncurrently tell you about it. I have a patch to show that, which I'll\nsend $later.\n\n\n",
"msg_date": "Mon, 31 Oct 2022 17:37:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-31 14:37:39 -0400, Melanie Plageman wrote:\n> and heapgetpage(). heapgettup() and heapgettup_pagemode() have a lot of\n> duplicated code, confusingly nested if statements, and unnecessary local\n> variables. While working on a feature for the AIO/DIO patchset, I\n> noticed that it was difficult to add new code to heapgettup() and\n> heapgettup_pagemode() because of how the functions are written.\n\nThanks for working on this - the current state is quite painful.\n\n\n> From cde2d6720f4f5ab2531c22ad4a5f0d9e6ec1039d Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Wed, 26 Oct 2022 20:00:34 -0400\n> Subject: [PATCH v1 1/3] Remove breaks in HeapTupleSatisfiesVisibility\n>\n> breaks in HeapTupleSatisfiesVisibility were superfluous\n> ---\n> src/backend/access/heap/heapam_visibility.c | 7 -------\n> 1 file changed, 7 deletions(-)\n>\n> diff --git a/src/backend/access/heap/heapam_visibility.c b/src/backend/access/heap/heapam_visibility.c\n> index 6e33d1c881..dd5d5da190 100644\n> --- a/src/backend/access/heap/heapam_visibility.c\n> +++ b/src/backend/access/heap/heapam_visibility.c\n> @@ -1769,25 +1769,18 @@ HeapTupleSatisfiesVisibility(HeapTuple htup, Snapshot snapshot, Buffer buffer)\n> \t{\n> \t\tcase SNAPSHOT_MVCC:\n> \t\t\treturn HeapTupleSatisfiesMVCC(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t\tcase SNAPSHOT_SELF:\n> \t\t\treturn HeapTupleSatisfiesSelf(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t\tcase SNAPSHOT_ANY:\n> \t\t\treturn HeapTupleSatisfiesAny(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t\tcase SNAPSHOT_TOAST:\n> \t\t\treturn HeapTupleSatisfiesToast(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t\tcase SNAPSHOT_DIRTY:\n> \t\t\treturn HeapTupleSatisfiesDirty(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t\tcase SNAPSHOT_HISTORIC_MVCC:\n> \t\t\treturn HeapTupleSatisfiesHistoricMVCC(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t\tcase SNAPSHOT_NON_VACUUMABLE:\n> \t\t\treturn 
HeapTupleSatisfiesNonVacuumable(htup, snapshot, buffer);\n> -\t\t\tbreak;\n> \t}\n\nNot sure what the author of this code, a certain Mr Freund, was thinking when\nhe added those returns...\n\n\n> From 9d8b01960463dc64ff5b111d523ff80fce3017af Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Mon, 31 Oct 2022 13:40:06 -0400\n> Subject: [PATCH v1 2/3] Turn HeapKeyTest macro into function\n>\n> This should always be inlined appropriately now. It is easier to read as\n> a function. Also, remove unused include in catcache.c.\n> ---\n> src/backend/access/heap/heapam.c | 10 ++--\n> src/backend/utils/cache/catcache.c | 1 -\n> src/include/access/valid.h | 76 ++++++++++++------------------\n> 3 files changed, 36 insertions(+), 51 deletions(-)\n>\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 12be87efed..1c995faa12 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -716,8 +716,10 @@ heapgettup(HeapScanDesc scan,\n> \t\t\t\t\t\t\t\t\t\t\t\t\tsnapshot);\n>\n> \t\t\t\tif (valid && key != NULL)\n> -\t\t\t\t\tHeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd),\n> -\t\t\t\t\t\t\t\tnkeys, key, valid);\n> +\t\t\t\t{\n> +\t\t\t\t\tvalid = HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd),\n> +\t\t\t\t\t\t\t\tnkeys, key);\n> +\t\t\t\t}\n>\n> \t\t\t\tif (valid)\n> \t\t\t\t{\n\nsuperfluous parens.\n\n\n\n> --- a/src/include/access/valid.h\n> +++ b/src/include/access/valid.h\n> @@ -19,51 +19,35 @@\n> *\n> *\t\tTest a heap tuple to see if it satisfies a scan key.\n> */\n> -#define HeapKeyTest(tuple, \\\n> -\t\t\t\t\ttupdesc, \\\n> -\t\t\t\t\tnkeys, \\\n> -\t\t\t\t\tkeys, \\\n> -\t\t\t\t\tresult) \\\n> -do \\\n> -{ \\\n> -\t/* Use underscores to protect the variables passed in as parameters */ \\\n> -\tint\t\t\t__cur_nkeys = (nkeys); \\\n> -\tScanKey\t\t__cur_keys = (keys); \\\n> - \\\n> -\t(result) = true; /* may change */ \\\n> -\tfor (; 
__cur_nkeys--; __cur_keys++) \\\n> -\t{ \\\n> -\t\tDatum\t__atp; \\\n> -\t\tbool\t__isnull; \\\n> -\t\tDatum\t__test; \\\n> - \\\n> -\t\tif (__cur_keys->sk_flags & SK_ISNULL) \\\n> -\t\t{ \\\n> -\t\t\t(result) = false; \\\n> -\t\t\tbreak; \\\n> -\t\t} \\\n> - \\\n> -\t\t__atp = heap_getattr((tuple), \\\n> -\t\t\t\t\t\t\t __cur_keys->sk_attno, \\\n> -\t\t\t\t\t\t\t (tupdesc), \\\n> -\t\t\t\t\t\t\t &__isnull); \\\n> - \\\n> -\t\tif (__isnull) \\\n> -\t\t{ \\\n> -\t\t\t(result) = false; \\\n> -\t\t\tbreak; \\\n> -\t\t} \\\n> - \\\n> -\t\t__test = FunctionCall2Coll(&__cur_keys->sk_func, \\\n> -\t\t\t\t\t\t\t\t __cur_keys->sk_collation, \\\n> -\t\t\t\t\t\t\t\t __atp, __cur_keys->sk_argument); \\\n> - \\\n> -\t\tif (!DatumGetBool(__test)) \\\n> -\t\t{ \\\n> -\t\t\t(result) = false; \\\n> -\t\t\tbreak; \\\n> -\t\t} \\\n> -\t} \\\n> -} while (0)\n> +static inline bool\n> +HeapKeyTest(HeapTuple tuple, TupleDesc tupdesc, int nkeys, ScanKey keys)\n> +{\n> +\tint cur_nkeys = nkeys;\n> +\tScanKey cur_key = keys;\n> +\n> +\tfor (; cur_nkeys--; cur_key++)\n> +\t{\n> +\t\tDatum atp;\n> +\t\tbool isnull;\n> +\t\tDatum test;\n> +\n> +\t\tif (cur_key->sk_flags & SK_ISNULL)\n> +\t\t\treturn false;\n> +\n> +\t\tatp = heap_getattr(tuple, cur_key->sk_attno, tupdesc, &isnull);\n> +\n> +\t\tif (isnull)\n> +\t\t\treturn false;\n> +\n> +\t\ttest = FunctionCall2Coll(&cur_key->sk_func,\n> +\t\t\t\t\t\t\t\tcur_key->sk_collation,\n> +\t\t\t\t\t\t\t\tatp, cur_key->sk_argument);\n> +\n> +\t\tif (!DatumGetBool(test))\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\treturn true;\n> +}\n\nSeems like a simple and nice win in readability.\n\nI recall looking at this in the past and thinking that there was some\nadditional subtlety here, but I can't see what that'd be.\n\n\n\n> From a894ce38c488df6546392b9f3bd894b67edf951e Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Mon, 31 Oct 2022 13:40:29 -0400\n> Subject: [PATCH v1 3/3] Refactor heapgettup* and heapgetpage\n>\n> 
Simplify heapgettup(), heapgettup_pagemode(), and heapgetpage(). All\n> three contained several unnecessary local variables, duplicate code, and\n> nested if statements. Streamlining these improves readability and\n> extensibility.\n\nIt'd be nice to break this into slightly smaller chunks.\n\n\n> +\n> +static inline void heapgettup_no_movement(HeapScanDesc scan)\n> +{\n\nFWIW, for function definitions we keep the return type (and with that also the\nthe \"static inline\") on a separate line.\n\n\n> +\tItemId\t\tlpp;\n> +\tOffsetNumber lineoff;\n> +\tBlockNumber page;\n> +\tPage dp;\n> +\tHeapTuple\ttuple = &(scan->rs_ctup);\n> +\t/*\n> +\t* ``no movement'' scan direction: refetch prior tuple\n> +\t*/\n> +\n> +\t/* Since the tuple was previously fetched, needn't lock page here */\n> +\tif (!scan->rs_inited)\n> +\t{\n> +\t\tAssert(!BufferIsValid(scan->rs_cbuf));\n> +\t\ttuple->t_data = NULL;\n> +\t\treturn;\n\nIs it possible to have a no-movement scan with an uninitialized scan? That\ndoesn't really seem to make sense. At least that's how I understand the\nexplanation for NoMovement nearby:\n * dir == NoMovementScanDirection means \"re-fetch the tuple indicated\n * by scan->rs_ctup\".\n\nWe can't have a rs_ctup without an already started scan, no?\n\nLooks like this is pre-existing code that you just moved, but it still seems\nwrong.\n\n\n> +\t}\n> +\tpage = ItemPointerGetBlockNumber(&(tuple->t_self));\n> +\tif (page != scan->rs_cblock)\n> +\t\theapgetpage((TableScanDesc) scan, page);\n\n\nWe have a\n\tBlockNumber page;\nand\n\tPage\t\tdp;\nin this code which seems almost intentionally confusing. This again is a\npre-existing issue but perhaps we could clean it up first?\n\n\n\n> +static inline Page heapgettup_continue_page(HeapScanDesc scan, BlockNumber page, ScanDirection dir,\n> +\t\tint *linesleft, OffsetNumber *lineoff)\n> +{\n> +\tHeapTuple\ttuple = &(scan->rs_ctup);\n\nHm. Finding the next offset via rs_ctup doesn't seem quite right. 
For one,\nit's not actually that cheap to extract the offset from an ItemPointer because\nof the the way we pack it into ItemPointerData.\n\n\n> +\tPage dp = BufferGetPage(scan->rs_cbuf);\n> +\tTestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, dp);\n\nNewlines between definitions and code :)\n\nPerhaps worth asserting that the scan is initialized and that rs_cbuf is valid?\n\n\n> +\tif (ScanDirectionIsForward(dir))\n> +\t{\n> +\t\t*lineoff = OffsetNumberNext(ItemPointerGetOffsetNumber(&(tuple->t_self)));\n> +\t\t*linesleft = PageGetMaxOffsetNumber(dp) - (*lineoff) + 1;\n\nWe can't access PageGetMaxOffsetNumber etc without holding a lock on the\npage. It's not immediately obvious that that is held in all paths.\n\n\n> +static inline BlockNumber heapgettup_initial_page(HeapScanDesc scan, ScanDirection dir)\n> +{\n> +\tAssert(!ScanDirectionIsNoMovement(dir));\n> +\tAssert(!scan->rs_inited);\n\nIs there a reason we couldn't set rs_inited in here, rather than reapeating\nthat in all callers?\n\n\nISTM that this function should deal with the\n\t\t\t/*\n\t\t\t * return null immediately if relation is empty\n\t\t\t */\n\nlogic, I think you now are repeating that check on every call to heapgettup().\n\n\n> @@ -511,182 +711,55 @@ heapgettup(HeapScanDesc scan,\n> \t\t ScanKey key)\n> {\n> \tHeapTuple\ttuple = &(scan->rs_ctup);\n> -\tSnapshot\tsnapshot = scan->rs_base.rs_snapshot;\n> -\tbool\t\tbackward = ScanDirectionIsBackward(dir);\n> \tBlockNumber page;\n> -\tbool\t\tfinished;\n> \tPage\t\tdp;\n> -\tint\t\t\tlines;\n> \tOffsetNumber lineoff;\n> \tint\t\t\tlinesleft;\n> -\tItemId\t\tlpp;\n> +\n> +\tif (ScanDirectionIsNoMovement(dir))\n> +\t\treturn heapgettup_no_movement(scan);\n\nMaybe add an unlikely() - this path is barely ever used...\n\n\n> \t/*\n> -\t * calculate next starting lineoff, given scan direction\n> +\t * return null immediately if relation is empty\n> \t */\n> -\tif (ScanDirectionIsForward(dir))\n> +\tif (scan->rs_nblocks == 0 || 
scan->rs_numblocks == 0)\n> \t{\n\nAs mentioned above, I don't think we should repeat the nblocks check on every\ncall.\n\n\n> +\t\tpage = scan->rs_cblock;\n> +\t\tLockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);\n> +\t\tdp = heapgettup_continue_page(scan, page, dir, &linesleft, &lineoff);\n> +\t\tgoto continue_page;\n> \t}\n>\n> \t/*\n> \t * advance the scan until we find a qualifying tuple or run out of stuff\n> \t * to scan\n> \t */\n> -\tlpp = PageGetItemId(dp, lineoff);\n> -\tfor (;;)\n> +\twhile (page != InvalidBlockNumber)\n> \t{\n> +\t\theapgetpage((TableScanDesc) scan, page);\n> +\t\tLockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);\n> +\t\tdp = heapgettup_start_page(scan, page, dir, &linesleft, &lineoff);\n> +\tcontinue_page:\n\n\nI don't like the goto continue_page at all. Seems that the paths leading here\nshould call LockBuffer(), heapgettup_start_page() etc? Possibly a do {} while\n() loop could do the trick as well.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 31 Oct 2022 18:09:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "Thanks for the review!\nAttached is v2 with feedback addressed.\n\nOn Mon, Oct 31, 2022 at 9:09 PM Andres Freund <andres@anarazel.de> wrote:\n> > From 9d8b01960463dc64ff5b111d523ff80fce3017af Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Mon, 31 Oct 2022 13:40:06 -0400\n> > Subject: [PATCH v1 2/3] Turn HeapKeyTest macro into function\n> >\n> > This should always be inlined appropriately now. It is easier to read as\n> > a function. Also, remove unused include in catcache.c.\n> > ---\n> > src/backend/access/heap/heapam.c | 10 ++--\n> > src/backend/utils/cache/catcache.c | 1 -\n> > src/include/access/valid.h | 76 ++++++++++++------------------\n> > 3 files changed, 36 insertions(+), 51 deletions(-)\n> >\n> > diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> > index 12be87efed..1c995faa12 100644\n> > --- a/src/backend/access/heap/heapam.c\n> > +++ b/src/backend/access/heap/heapam.c\n> > @@ -716,8 +716,10 @@ heapgettup(HeapScanDesc scan,\n> > snapshot);\n> >\n> > if (valid && key != NULL)\n> > - HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd),\n> > - nkeys, key, valid);\n> > + {\n> > + valid = HeapKeyTest(tuple, RelationGetDescr(scan->rs_base.rs_rd),\n> > + nkeys, key);\n> > + }\n> >\n> > if (valid)\n> > {\n>\n> superfluous parens.\n\nfixed.\n\n> > From a894ce38c488df6546392b9f3bd894b67edf951e Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Mon, 31 Oct 2022 13:40:29 -0400\n> > Subject: [PATCH v1 3/3] Refactor heapgettup* and heapgetpage\n> >\n> > Simplify heapgettup(), heapgettup_pagemode(), and heapgetpage(). All\n> > three contained several unnecessary local variables, duplicate code, and\n> > nested if statements. Streamlining these improves readability and\n> > extensibility.\n>\n> It'd be nice to break this into slightly smaller chunks.\n\nI can do that. 
Since incorporating feedback will be harder once I break\nit up into smaller chunks, I'm inclined to wait to do so until I know\nthat the structure I have now is the one we will go with. (I know smaller\nchunks will make it more reviewable.)\n\n> > +\n> > +static inline void heapgettup_no_movement(HeapScanDesc scan)\n> > +{\n>\n> FWIW, for function definitions we keep the return type (and with that also the\n> the \"static inline\") on a separate line.\n\nFixed\n\n>\n> > + ItemId lpp;\n> > + OffsetNumber lineoff;\n> > + BlockNumber page;\n> > + Page dp;\n> > + HeapTuple tuple = &(scan->rs_ctup);\n> > + /*\n> > + * ``no movement'' scan direction: refetch prior tuple\n> > + */\n> > +\n> > + /* Since the tuple was previously fetched, needn't lock page here */\n> > + if (!scan->rs_inited)\n> > + {\n> > + Assert(!BufferIsValid(scan->rs_cbuf));\n> > + tuple->t_data = NULL;\n> > + return;\n>\n> Is it possible to have a no-movement scan with an uninitialized scan? That\n> doesn't really seem to make sense. At least that's how I understand the\n> explanation for NoMovement nearby:\n> * dir == NoMovementScanDirection means \"re-fetch the tuple indicated\n> * by scan->rs_ctup\".\n>\n> We can't have a rs_ctup without an already started scan, no?\n>\n> Looks like this is pre-existing code that you just moved, but it still seems\n> wrong.\n\nChanged to an assert\n\n>\n> > + }\n> > + page = ItemPointerGetBlockNumber(&(tuple->t_self));\n> > + if (page != scan->rs_cblock)\n> > + heapgetpage((TableScanDesc) scan, page);\n>\n>\n> We have a\n> BlockNumber page;\n> and\n> Page dp;\n> in this code which seems almost intentionally confusing. 
This again is a\n> pre-existing issue but perhaps we could clean it up first?\n\nin attached\npage -> block\ndp -> page\nin basically all locations in heapam.c (should that be its own commit?)\n\n> > +static inline Page heapgettup_continue_page(HeapScanDesc scan, BlockNumber page, ScanDirection dir,\n> > + int *linesleft, OffsetNumber *lineoff)\n> > +{\n> > + HeapTuple tuple = &(scan->rs_ctup);\n>\n> Hm. Finding the next offset via rs_ctup doesn't seem quite right. For one,\n> it's not actually that cheap to extract the offset from an ItemPointer because\n> of the the way we pack it into ItemPointerData.\n\nSo, it was like this before [1].\nWhat about saving the lineoff in rs_cindex.\n\nIt is smaller, but that seems okay, right?\n\n> > + Page dp = BufferGetPage(scan->rs_cbuf);\n> > + TestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, dp);\n>\n> Newlines between definitions and code :)\n\nk\n\n> Perhaps worth asserting that the scan is initialized and that rs_cbuf is valid?\n\nindeed.\n\n>\n> > + if (ScanDirectionIsForward(dir))\n> > + {\n> > + *lineoff = OffsetNumberNext(ItemPointerGetOffsetNumber(&(tuple->t_self)));\n> > + *linesleft = PageGetMaxOffsetNumber(dp) - (*lineoff) + 1;\n>\n> We can't access PageGetMaxOffsetNumber etc without holding a lock on the\n> page. It's not immediately obvious that that is held in all paths.\n\nIn heapgettup() I call LockBuffer() before invoking\nheapgettup_continue_page() and heapgettup_start_page() which are the\nones doing this.\n\nI did have big plans for using the continue_page and start_page\nfunctions in heapgettup_pagemode() as well, but since I'm not doing that\nnow, I can add in an expectation that the lock is held.\n\nI added a comment saying the caller is responsible for acquiring the\nlock if needed. 
I thought of adding an assert, but I don't see that\nbeing done outside of bufmgr.c\n\n BufferDesc *bufHdr = GetBufferDescriptor(buffer - 1);\n Assert(LWLockHeldByMe(BufferDescriptorGetContentLock(bufHdr)));\n\n> > +static inline BlockNumber heapgettup_initial_page(HeapScanDesc scan, ScanDirection dir)\n> > +{\n> > + Assert(!ScanDirectionIsNoMovement(dir));\n> > + Assert(!scan->rs_inited);\n>\n> Is there a reason we couldn't set rs_inited in here, rather than reapeating\n> that in all callers?\n\nI wasn't sure if future callers or existing callers in the future may\nneed to do steps other than what is in heapgettup_initial_page() before\nsetting rs_inited. I thought of the responsibility of\nheapgettup_initial_page() as returning the initial page to start the\nscan. If it is going to do all initialization steps, perhaps the name\nshould change? I thought having a function that says it does\ninitialization of the scan might be confusing since initscan() also\nexists.\n\n> ISTM that this function should deal with the\n> /*\n> * return null immediately if relation is empty\n> */\n>\n> logic, I think you now are repeating that check on every call to heapgettup().\n\nSo, that's a good point. If I move setting rs_inited inside of\nheapgettup_initial_page(), then I can also easily move the empty table\ncheck inside there too.\n\nI don't want to set rs_inited before every return in\nheapgettup_initial_page(). Do you think there are any issues with\nsetting it at the top of the function?\n\nI thought about setting it at the very top (even before checking if the\nrelation is empty) Is it okay to set it before the empty table check?\nrs_inited will be set to false at the bottom before returning. 
But,\nmaybe this will be an issue in other callers of\nheapgettup_initial_page()?\n\nAnyway, I have changed it in attached v2.\n\n> > @@ -511,182 +711,55 @@ heapgettup(HeapScanDesc scan,\n> > ScanKey key)\n> > {\n> > HeapTuple tuple = &(scan->rs_ctup);\n> > - Snapshot snapshot = scan->rs_base.rs_snapshot;\n> > - bool backward = ScanDirectionIsBackward(dir);\n> > BlockNumber page;\n> > - bool finished;\n> > Page dp;\n> > - int lines;\n> > OffsetNumber lineoff;\n> > int linesleft;\n> > - ItemId lpp;\n> > +\n> > + if (ScanDirectionIsNoMovement(dir))\n> > + return heapgettup_no_movement(scan);\n>\n> Maybe add an unlikely() - this path is barely ever used...\n\ndone.\n\n> > + page = scan->rs_cblock;\n> > + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);\n> > + dp = heapgettup_continue_page(scan, page, dir, &linesleft, &lineoff);\n> > + goto continue_page;\n> > }\n> >\n> > /*\n> > * advance the scan until we find a qualifying tuple or run out of stuff\n> > * to scan\n> > */\n> > - lpp = PageGetItemId(dp, lineoff);\n> > - for (;;)\n> > + while (page != InvalidBlockNumber)\n> > {\n> > + heapgetpage((TableScanDesc) scan, page);\n> > + LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);\n> > + dp = heapgettup_start_page(scan, page, dir, &linesleft, &lineoff);\n> > + continue_page:\n>\n>\n> I don't like the goto continue_page at all. Seems that the paths leading here\n> should call LockBuffer(), heapgettup_start_page() etc? Possibly a do {} while\n> () loop could do the trick as well.\n\nI don't see how a do while loop would solve help with the problem.\nWe need to check if the block number is valid after getting a block\nassignment before doing heapgetpage() (e.g. 
after\nheapgettup_initial_page() and after heapgettup_advance_page()).\n\nRemoving the goto continue_page means adding the heapgetpage(),\nheapgettup_start_page(), etc. code block in two places now (both after\nheapgettup_initial_page() and after heapgettup_advance_page()) and, in\nboth locations, we have to add an if statement to check if the block is\nvalid. I feel like this makes the function longer and harder to\nunderstand. Keeping the loop as short as possible makes it clear what it\nis doing. I think that with an appropriate warning comment, the goto\ncontinue_page is clearer and easier to understand. To me, starting a\npage at the top of the outer loop, then looping through the lines in the\npage, is the structure that makes the most sense.\n\n- Melanie\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/access/heap/heapam.c#L572",
"msg_date": "Fri, 4 Nov 2022 11:51:03 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On 04.11.22 16:51, Melanie Plageman wrote:\n> Thanks for the review!\n> Attached is v2 with feedback addressed.\n\nYour 0001 had already been pushed.\n\nI have pushed your 0002.\n\nI have also pushed the renaming of page -> block, dp -> page separately.\nThis should reduce the size of your 0003 a bit.\n\nPlease produce an updated version of the 0003 patch for further review.\n\n\n\n",
"msg_date": "Wed, 16 Nov 2022 16:49:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 10:49 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 04.11.22 16:51, Melanie Plageman wrote:\n> > Thanks for the review!\n> > Attached is v2 with feedback addressed.\n>\n> Your 0001 had already been pushed.\n>\n> I have pushed your 0002.\n>\n> I have also pushed the renaming of page -> block, dp -> page separately.\n> This should reduce the size of your 0003 a bit.\n>\n> Please produce an updated version of the 0003 page for further review.\n\nThanks for looking at this!\nI have attached a patchset with only the code changes contained in the\nprevious patch 0003. I have broken the refactoring down into many\nsmaller pieces for ease of review.\n\n- Melanie",
"msg_date": "Wed, 30 Nov 2022 17:34:50 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On 30.11.22 23:34, Melanie Plageman wrote:\n> I have attached a patchset with only the code changes contained in the\n> previous patch 0003. I have broken the refactoring down into many\n> smaller pieces for ease of review.\n\nTo keep this moving along a bit, I have committed your 0002, which I \nthink is a nice little improvement on its own.\n\n\n\n",
"msg_date": "Mon, 2 Jan 2023 11:22:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 5:23 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 30.11.22 23:34, Melanie Plageman wrote:\n> > I have attached a patchset with only the code changes contained in the\n> > previous patch 0003. I have broken the refactoring down into many\n> > smaller pieces for ease of review.\n>\n> To keep this moving along a bit, I have committed your 0002, which I\n> think is a nice little improvement on its own.\n\nThanks!\nI've attached a rebased patchset - v4.\n\nI also changed heapgettup_no_movement() to noinline (instead of inline).\nDavid Rowley pointed out that this might make more sense given how\ncomparatively rare no movement scans are.\n\n- Melanie",
"msg_date": "Tue, 3 Jan 2023 15:39:37 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On 03.01.23 21:39, Melanie Plageman wrote:\n>> On 30.11.22 23:34, Melanie Plageman wrote:\n>>> I have attached a patchset with only the code changes contained in the\n>>> previous patch 0003. I have broken the refactoring down into many\n>>> smaller pieces for ease of review.\n>>\n>> To keep this moving along a bit, I have committed your 0002, which I\n>> think is a nice little improvement on its own.\n> \n> Thanks!\n> I've attached a rebased patchset - v4.\n> \n> I also changed heapgettup_no_movement() to noinline (instead of inline).\n> David Rowley pointed out that this might make more sense given how\n> comparatively rare no movement scans are.\n\nOk, let's look through these patches starting from the top then.\n\nv4-0001-Add-no-movement-scan-helper.patch\n\nThis makes sense overall; there is clearly some duplicate code that can \nbe unified.\n\nIt appears that during your rebasing you have effectively reverted your \nearlier changes that have been committed as \n8e1db29cdbbd218ab6ba53eea56624553c3bef8c. You should undo that.\n\nI don't understand the purpose of the noinline maker. If it's not \nnecessary to inline, we can just leave it off, but there is no need to \noutright prevent inlining AFAICT.\n\nI don't know why you changed the if/else sequences. Before, the \nsequence was effectively\n\nif (forward)\n{\n ...\n}\nelse if (backward)\n{\n ...\n}\nelse\n{\n /* it's no movement */\n}\n\nNow it's changed to\n\nif (no movement)\n{\n ...\n return;\n}\n\nif (forward)\n{\n ...\n}\nelse\n{\n Assert(backward);\n ...\n}\n\nSure, that's the same thing, but it looks less elegant to me.\n\n\n\n",
"msg_date": "Thu, 5 Jan 2023 14:52:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 8:52 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Ok, let's look through these patches starting from the top then.\n>\n> v4-0001-Add-no-movement-scan-helper.patch\n>\n> This makes sense overall; there is clearly some duplicate code that can\n> be unified.\n>\n> It appears that during your rebasing you have effectively reverted your\n> earlier changes that have been committed as\n> 8e1db29cdbbd218ab6ba53eea56624553c3bef8c. You should undo that.\n\nThanks. I think I have addressed this.\nI've attached a rebased v5.\n\n> I don't understand the purpose of the noinline maker. If it's not\n> necessary to inline, we can just leave it off, but there is no need to\n> outright prevent inlining AFAICT.\n>\n\nI have removed it.\n\n> I don't know why you changed the if/else sequences. Before, the\n> sequence was effectively\n>\n> if (forward)\n> {\n> ...\n> }\n> else if (backward)\n> {\n> ...\n> }\n> else\n> {\n> /* it's no movement */\n> }\n>\n> Now it's changed to\n>\n> if (no movement)\n> {\n> ...\n> return;\n> }\n>\n> if (forward)\n> {\n> ...\n> }\n> else\n> {\n> Assert(backward);\n> ...\n> }\n>\n> Sure, that's the same thing, but it looks less elegant to me.\n\nIn this commit, you could keep the original ordering of if statements. I\npreferred no movement scan first because then backwards and forwards\nscans' code is physically closer to the rest of the code without the\nintrusion of the no movement scan code.\n\nUltimately, the refactor (in later patches) flips the ordering of if\nstatements at the top from\n if (scan direction)\n to\n if (initial or continue)\nand this isn't a very interesting distinction for no movement scans. By\ndealing with no movement scan at the top, I didn't have to handle no\nmovement scans in the initial and continue branches in the new structure.\n\nAlso, I will note that patches 4-6 at least and perhaps 4-7 do not make\nsense as separate commits. 
I separated them for ease of review. Each is\ncorrect and passes tests but is not really an improvement without the\nothers.\n\n- Melanie",
"msg_date": "Tue, 10 Jan 2023 15:31:05 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On 10.01.23 21:31, Melanie Plageman wrote:\n> On Thu, Jan 5, 2023 at 8:52 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Ok, let's look through these patches starting from the top then.\n>>\n>> v4-0001-Add-no-movement-scan-helper.patch\n>>\n>> This makes sense overall; there is clearly some duplicate code that can\n>> be unified.\n>>\n>> It appears that during your rebasing you have effectively reverted your\n>> earlier changes that have been committed as\n>> 8e1db29cdbbd218ab6ba53eea56624553c3bef8c. You should undo that.\n> \n> Thanks. I think I have addressed this.\n> I've attached a rebased v5.\n\nIn your v2 patch, you remove these assertions:\n\n- /* check that rs_cindex is in sync */\n- Assert(scan->rs_cindex < scan->rs_ntuples);\n- Assert(lineoff == scan->rs_vistuples[scan->rs_cindex]);\n\nIs that intentional?\n\nI don't see any explanation, or some other equivalent code appearing \nelsewhere to replace this.\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 12:04:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Thu, 19 Jan 2023 at 00:04, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> In your v2 patch, you remove these assertions:\n>\n> - /* check that rs_cindex is in sync */\n> - Assert(scan->rs_cindex < scan->rs_ntuples);\n> - Assert(lineoff == scan->rs_vistuples[scan->rs_cindex]);\n>\n> Is that intentional?\n>\n> I don't see any explanation, or some other equivalent code appearing\n> elsewhere to replace this.\n\nI guess it's because those asserts are not relevant unless\nheapgettup_no_movement() is being called from heapgettup_pagemode().\nMaybe they can be put back along the lines of:\n\nAssert((scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) == 0 ||\nscan->rs_cindex < scan->rs_ntuples);\nAssert((scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) == 0 || lineoff ==\nscan->rs_vistuples[scan->rs_cindex]);\n\nbut it probably would be cleaner to just do an: if\n(scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) { Assert(...);\nAssert(...}; }\n\nThe only issue I see with that is that we don't seem to have anywhere\nin the regression tests that call heapgettup_no_movement() when\nrs_flags have SO_ALLOW_PAGEMODE. At least, adding an elog(NOTICE) to\nheapgettup() just before calling heapgettup_no_movement() does not\nseem to cause make check to fail. I wonder if any series of SQL\ncommands would allow us to call heapgettup_no_movement() from\nheapgettup()?\n\nI think heapgettup_no_movement() also needs a header comment more\nalong the lines of:\n\n/*\n * heapgettup_no_movement\n * Helper function for NoMovementScanDirection direction for\nheapgettup() and\n * heapgettup_pagemode.\n */\n\nI pushed the pgindent stuff that v5-0001 did along with some additions\nto typedefs.list so that further runs could be done more easily as\nchanges are made to these patches.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 Jan 2023 00:08:15 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "Thanks for taking a look!\n\nOn Mon, Jan 23, 2023 at 6:08 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 19 Jan 2023 at 00:04, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > In your v2 patch, you remove these assertions:\n> >\n> > - /* check that rs_cindex is in sync */\n> > - Assert(scan->rs_cindex < scan->rs_ntuples);\n> > - Assert(lineoff == scan->rs_vistuples[scan->rs_cindex]);\n> >\n> > Is that intentional?\n> >\n> > I don't see any explanation, or some other equivalent code appearing\n> > elsewhere to replace this.\n>\n> I guess it's because those asserts are not relevant unless\n> heapgettup_no_movement() is being called from heapgettup_pagemode().\n> Maybe they can be put back along the lines of:\n>\n> Assert((scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) == 0 ||\n> scan->rs_cindex < scan->rs_ntuples);\n> Assert((scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) == 0 || lineoff ==\n> scan->rs_vistuples[scan->rs_cindex]);\n>\n> but it probably would be cleaner to just do an: if\n> (scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) { Assert(...);\n> Assert(...}; }\n\nI prefer the first method and have implemented that in attached v6.\n\n> The only issue I see with that is that we don't seem to have anywhere\n> in the regression tests that call heapgettup_no_movement() when\n> rs_flags have SO_ALLOW_PAGEMODE. At least, adding an elog(NOTICE) to\n> heapgettup() just before calling heapgettup_no_movement() does not\n> seem to cause make check to fail. 
I wonder if any series of SQL\n> commands would allow us to call heapgettup_no_movement() from\n> heapgettup()?\n\nSo, the places in which we set scan direction to no movement include:\n- explain analyze on a ctas with no data\n EXPLAIN ANALYZE CREATE TABLE foo AS SELECT 1 WITH NO DATA;\n However, in standard_ExecutorRun() we only call ExecutePlan() if the\n ScanDirection is not no movement, so this wouldn't hit our code\n- PortalRunSelect\n- PersistHoldablePortal()\n\nI can't say I know enough about portals currently to design a test that\nwill hit this code, but I will poke around some more.\n\n> I think heapgettup_no_movement() also needs a header comment more\n> along the lines of:\n>\n> /*\n> * heapgettup_no_movement\n> * Helper function for NoMovementScanDirection direction for\n> heapgettup() and\n> * heapgettup_pagemode.\n> */\n\nI've added a comment but I didn't include the function name in it -- I\nfind it repetitive when the comments above functions do that -- however,\nI'm not strongly attached to that.\n\n> I pushed the pgindent stuff that v5-0001 did along with some additions\n> to typedefs.list so that further runs could be done more easily as\n> changes are made to these patches.\n\nCool!\n\n- Melanie",
"msg_date": "Tue, 24 Jan 2023 16:17:23 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 04:17:23PM -0500, Melanie Plageman wrote:\n> Thanks for taking a look!\n> \n> On Mon, Jan 23, 2023 at 6:08 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 19 Jan 2023 at 00:04, Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > > In your v2 patch, you remove these assertions:\n> > >\n> > > - /* check that rs_cindex is in sync */\n> > > - Assert(scan->rs_cindex < scan->rs_ntuples);\n> > > - Assert(lineoff == scan->rs_vistuples[scan->rs_cindex]);\n> > >\n> > > Is that intentional?\n> > >\n> > > I don't see any explanation, or some other equivalent code appearing\n> > > elsewhere to replace this.\n> >\n> > I guess it's because those asserts are not relevant unless\n> > heapgettup_no_movement() is being called from heapgettup_pagemode().\n> > Maybe they can be put back along the lines of:\n> >\n> > Assert((scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) == 0 ||\n> > scan->rs_cindex < scan->rs_ntuples);\n> > Assert((scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) == 0 || lineoff ==\n> > scan->rs_vistuples[scan->rs_cindex]);\n> >\n> > but it probably would be cleaner to just do an: if\n> > (scan->rs_base.rs_flags & SO_ALLOW_PAGEMODE) { Assert(...);\n> > Assert(...}; }\n> \n> I prefer the first method and have implemented that in attached v6.\n> \n> > The only issue I see with that is that we don't seem to have anywhere\n> > in the regression tests that call heapgettup_no_movement() when\n> > rs_flags have SO_ALLOW_PAGEMODE. At least, adding an elog(NOTICE) to\n> > heapgettup() just before calling heapgettup_no_movement() does not\n> > seem to cause make check to fail. 
I wonder if any series of SQL\n> > commands would allow us to call heapgettup_no_movement() from\n> > heapgettup()?\n> \n> So, the places in which we set scan direction to no movement include:\n> - explain analyze on a ctas with no data\n> EXPLAIN ANALYZE CREATE TABLE foo AS SELECT 1 WITH NO DATA;\n> However, in standard_ExecutorRun() we only call ExecutePlan() if the\n> ScanDirection is not no movement, so this wouldn't hit our code\n> - PortalRunSelect\n> - PersistHoldablePortal()\n> \n> I can't say I know enough about portals currently to design a test that\n> will hit this code, but I will poke around some more.\n \nI don't think we can write a test for this afterall. I've started\nanother thread on the topic over here:\n\nhttps://www.postgresql.org/message-id/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY%3D_T8QEqZqOL02rpi2bw%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 24 Jan 2023 19:58:43 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "\"On Wed, 25 Jan 2023 at 10:17, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I've added a comment but I didn't include the function name in it -- I\n> find it repetitive when the comments above functions do that -- however,\n> I'm not strongly attached to that.\n\nI think the general format for header comments is:\n\n/*\n * <function name>\n *\\t\\t<brief summary of what function does>\n *\n * [Further details]\n */\n\nWe've certainly got places that don't follow that, but I don't think\nthat's any reason to have no comment or invent some new format.\n\nheapam.c seems to have some other format where we do: \"<function name>\n- <brief summary of what function does>\". I generally just try to copy\nthe style from the surrounding code. I think generally, people won't\nargue if you follow the style from the surrounding code, but there'd\nbe exceptions to that, I'm sure.\n\nI'll skip further review of 0001 here as the whole\nScanDirectionNoMovement case is being discussed on the other thread.\n\nv6-0002:\n\n1. heapgettup_initial_block() needs a header comment to mention what\nit does and what it returns. It would be good to make it obvious that\nit returns InvalidBlockNumber when there are no blocks to scan.\n\n2. After heapgettup_initial_block(), you're checking \"if (block ==\nInvalidBlockNumber). It might be worth a mention something like\n\n/*\n * Check if we got to the end of the scan already. This could happen for\n * an empty relation or if parallel workers scanned everything before we\n * got a chance.\n */\n\nthe backward scan comment wouldn't mention parallel workers.\n\nv6-0003:\n\n3. Can you explain why you removed the snapshot local variable in heapgettup()?\n\n4. I think it might be a good idea to use unlikely() in if\n(!scan->rs_inited). The idea is to help coax the compiler into moving\nthat code off to a cold path. 
That's likely especially important if\nheapgettup_initial_block is inlined, which I see it is marked as.\n\nv6-0004:\n\n5. heapgettup_start_page() and heapgettup_continue_page() both need a\nheader comment to explain what they do and what the inputs and output\narguments are.\n\n6. I'm not too sure what the following comment means:\n\n/* block and lineoff now reference the physically next tid */\n\n\"block\" is just a parameter to the function and its value is not\nadjusted. The comment makes it sound like something was changed.\n\n(I think these might be just not well updated from having split this\nout from the 0006 patch as the same comment makes more sense in 0006)\n\nv6-0005:\n\n7. heapgettup_advance_block() needs a header comment.\n\n8. Is there a reason why heapgettup_advance_block() handle backward\nscans first? I'd expect you should just follow the lead of the other\nfunctions and do ScanDirectionIsForward first.\n\nv6-0006\n\nDavid\n\n\n",
"msg_date": "Sat, 28 Jan 2023 16:34:27 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "v7 attached\n\nOn Fri, Jan 27, 2023 at 10:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> \"On Wed, 25 Jan 2023 at 10:17, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I've added a comment but I didn't include the function name in it -- I\n> > find it repetitive when the comments above functions do that -- however,\n> > I'm not strongly attached to that.\n>\n> I think the general format for header comments is:\n>\n> /*\n> * <function name>\n> *\\t\\t<brief summary of what function does>\n> *\n> * [Further details]\n> */\n>\n> We've certainly got places that don't follow that, but I don't think\n> that's any reason to have no comment or invent some new format.\n>\n> heapam.c seems to have some other format where we do: \"<function name>\n> - <brief summary of what function does>\". I generally just try to copy\n> the style from the surrounding code. I think generally, people won't\n> argue if you follow the style from the surrounding code, but there'd\n> be exceptions to that, I'm sure.\n\nI have followed the same convention as the other functions in heapam.c\nin the various helper functions comments I've added in this version.\n\n> v6-0002:\n>\n> 1. heapgettup_initial_block() needs a header comment to mention what\n> it does and what it returns. It would be good to make it obvious that\n> it returns InvalidBlockNumber when there are no blocks to scan.\n\nI've done this.\n\n>\n> 2. After heapgettup_initial_block(), you're checking \"if (block ==\n> InvalidBlockNumber). It might be worth a mention something like\n>\n> /*\n> * Check if we got to the end of the scan already. This could happen for\n> * an empty relation or if parallel workers scanned everything before we\n> * got a chance.\n> */\n>\n> the backward scan comment wouldn't mention parallel workers.\n\nI've done this as well.\n\n>\n> v6-0003:\n>\n> 3. 
Can you explain why you removed the snapshot local variable in heapgettup()?\n\nIn the subsequent commit, the helpers I add call TestForOldSnapshot(),\nand I didn't want to pass in the snapshot as a separate parameter since\nI already need to pass the scan descriptor. I thought it was confusing\nto have a local variable (snapshot) used in some places and the one in\nthe scan used in others. This \"streamlining\" commit also reduces the\nnumber of times the snapshot variable is used, making it less necessary\nto have a local variable.\n\nI didn't remove the snapshot local variable in the same commit as adding\nthe helpers because I thought it made the diff harder to understand (for\nreview, the final commit should likely not be separate patches).\n\n> 4. I think it might be a good idea to use unlikely() in if\n> (!scan->rs_inited). The idea is to help coax the compiler into moving\n> that code off to a cold path. That's likely especially important if\n> heapgettup_initial_block is inlined, which I see it is marked as.\n\nI've gone ahead and added unlikely. However, should I perhaps skip\ninlining the heapgettup_initial_block() function?\n\n> v6-0004:\n>\n> 5. heapgettup_start_page() and heapgettup_continue_page() both need a\n> header comment to explain what they do and what the inputs and output\n> arguments are.\n\nI've added these. I've also removed an unused parameter to both, block.\n\n>\n> 6. I'm not too sure what the following comment means:\n>\n> /* block and lineoff now reference the physically next tid */\n>\n> \"block\" is just a parameter to the function and its value is not\n> adjusted. The comment makes it sound like something was changed.\n>\n> (I think these might be just not well updated from having split this\n> out from the 0006 patch as the same comment makes more sense in 0006)\n\nYes, that is true. I've updated it to just mention lineoff.\n\n> v6-0005:\n>\n> 7. heapgettup_advance_block() needs a header comment.\n>\n> 8. 
Is there a reason why heapgettup_advance_block() handle backward\n> scans first? I'd expect you should just follow the lead of the other\n> functions and do ScanDirectionIsForward first.\n\nThe reason I do this is that backwards scans cannot be parallel, so\nhandling backwards scans first let me return, then handle parallel\nscans, then forward scans. This reduced the level of nesting/if\nstatements for all of the code in this function.\n\n- Melanie",
"msg_date": "Mon, 30 Jan 2023 18:18:45 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Tue, 31 Jan 2023 at 12:18, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Fri, Jan 27, 2023 at 10:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > 4. I think it might be a good idea to use unlikely() in if\n> > (!scan->rs_inited). The idea is to help coax the compiler into moving\n> > that code off to a cold path. That's likely especially important if\n> > heapgettup_initial_block is inlined, which I see it is marked as.\n>\n> I've gone ahead and added unlikely. However, should I perhaps skip\n> inlining the heapgettup_initial_block() function?\n\nI'm not sure of the exact best combination of functions to mark as\ninline. I did try the v7 patchset from 0002 to 0006 on top of c2891175\nand I found that the performance is slightly better after removing\ninline from all 4 of the helper functions. However, I think if we do\nunlikely() and the function is moved into the cold path then it\nmatters less if it's inlined.\n\ncreate table a (a int);\ninsert into a select x from generate_series(1,1000000)x;\nvacuum freeze a;\n\n$ cat seqscan.sql\nselect * from a where a = 0;\n$ cat countall.sql\nselect count(*) from a;\n\nseqscan.sql filters out all rows and countall.sql returns all rows and\ndoes an aggregate so we don't have to return all those in the query.\n\nmax_parallel_workers_per_gather=0;\n\nmaster\n$ psql -c \"select pg_prewarm('a')\" postgres > /dev/null && for i in\n{1..3}; do pgbench -n -f seqscan.sql -M prepared -T 10 postgres | grep\ntps; done\ntps = 25.464091 (without initial connection time)\ntps = 25.117001 (without initial connection time)\ntps = 25.141646 (without initial connection time)\n\n$ psql -c \"select pg_prewarm('a')\" postgres > /dev/null && for i in\n{1..3}; do pgbench -n -f countall.sql -M prepared -T 10 postgres |\ngrep tps; done\ntps = 27.906307 (without initial connection time)\ntps = 27.527580 (without initial connection time)\ntps = 27.563035 (without initial connection time)\n\nmaster + v7\n$ psql 
-c \"select pg_prewarm('a')\" postgres > /dev/null && for i in\n{1..3}; do pgbench -n -f seqscan.sql -M prepared -T 10 postgres | grep\ntps; done\ntps = 25.920370 (without initial connection time)\ntps = 25.680052 (without initial connection time)\ntps = 24.988895 (without initial connection time)\n\n$ psql -c \"select pg_prewarm('a')\" postgres > /dev/null && for i in\n{1..3}; do pgbench -n -f countall.sql -M prepared -T 10 postgres |\ngrep tps; done\ntps = 33.783122 (without initial connection time)\ntps = 33.248571 (without initial connection time)\ntps = 33.512984 (without initial connection time)\n\nmaster + v7 + inline removed\n$ psql -c \"select pg_prewarm('a')\" postgres > /dev/null && for i in\n{1..3}; do pgbench -n -f seqscan.sql -M prepared -T 10 postgres | grep\ntps; done\ntps = 27.680115 (without initial connection time)\ntps = 26.418562 (without initial connection time)\ntps = 26.166800 (without initial connection time)\n\n$ psql -c \"select pg_prewarm('a')\" postgres > /dev/null && for i in\n{1..3}; do pgbench -n -f countall.sql -M prepared -T 10 postgres |\ngrep tps; done\ntps = 33.948588 (without initial connection time)\ntps = 33.684966 (without initial connection time)\ntps = 33.946700 (without initial connection time)\n\nYou can see that v7 helps countall.sql quite a bit. It seems to also\nhelp a little bit with seqscan.sql. v7 + inline removed makes\nseqscan.sql a decent amount faster than both master and master + v7.\n\nDavid\n\n\n",
"msg_date": "Wed, 1 Feb 2023 19:06:19 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Tue, 31 Jan 2023 at 12:18, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> v7 attached\n\nI've been looking over the v7-0002 patch today and I did make a few\nadjustments to heapgettup_initial_block() as I would prefer to see the\nbranching of each of all these helper functions follow the pattern of:\n\nif (<forward scan>)\n{\n if (<parallel scan>)\n <parallel stuff>\n else\n <serial stuff>\n}\nelse\n{\n <backwards serial stuff>\n}\n\nwhich wasn't quite what the function was doing.\n\nAlong the way, I noticed that 0002 has a subtle bug that does not seem\nto be present once the remaining patches are applied. I think I'm\nhappier to push these along the lines of how you have them in the\npatches, so I've held off pushing for now due to the bug and the\nchange I had to make to fix it.\n\nThe problem is around the setting of scan->rs_inited = true; you've\nmoved that into heapgettup_initial_block() and you've correctly not\ninitialised the scan for empty tables when you return\nInvalidBlockNumber, however, you've not correctly considered the fact\nthat table_block_parallelscan_nextpage() could also return\nInvalidBlockNumber if the parallel workers manage to grab all of the\nblocks before the current process gets the first block. I don't know\nfor sure, but it looks like this could cause problems when\nheapgettup() or heapgettup_pagemode() got called again for a rescan.\nWe'd have returned the NULL tuple to indicate that no further tuples\nexist, but we'll have left rs_inited set to true which looks like\nit'll cause issues.\n\nI wondered if it might be better to do the scan->rs_inited = true; in\nheapgettup() and heapgettup_pagemode() instead. The attached v8 patch\ndoes it this way. Despite this fixing that bug, I think this might be\na slightly better division of duties.\n\nIf you're ok with the attached (and everyone else is too), then I can\npush it in the (NZ) morning. 
The remaining patches would need to be\nrebased due to my changes.\n\nDavid",
"msg_date": "Thu, 2 Feb 2023 00:21:20 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Thu, Feb 02, 2023 at 12:21:20AM +1300, David Rowley wrote:\n> On Tue, 31 Jan 2023 at 12:18, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > v7 attached\n> \n> I've been looking over the v7-0002 patch today and I did make a few\n> adjustments to heapgettup_initial_block() as I would prefer to see the\n> branching of each of all these helper functions follow the pattern of:\n> \n> if (<forward scan>)\n> {\n> if (<parallel scan>)\n> <parallel stuff>\n> else\n> <serial stuff>\n> }\n> else\n> {\n> <backwards serial stuff>\n> }\n> \n> which wasn't quite what the function was doing.\n\nI'm fine with this. One code comment about the new version inline.\n\n> Along the way, I noticed that 0002 has a subtle bug that does not seem\n> to be present once the remaining patches are applied. I think I'm\n> happier to push these along the lines of how you have them in the\n> patches, so I've held off pushing for now due to the bug and the\n> change I had to make to fix it.\n> \n> The problem is around the setting of scan->rs_inited = true; you've\n> moved that into heapgettup_initial_block() and you've correctly not\n> initialised the scan for empty tables when you return\n> InvalidBlockNumber, however, you've not correctly considered the fact\n> that table_block_parallelscan_nextpage() could also return\n> InvalidBlockNumber if the parallel workers manage to grab all of the\n> blocks before the current process gets the first block. I don't know\n> for sure, but it looks like this could cause problems when\n> heapgettup() or heapgettup_pagemode() got called again for a rescan.\n> We'd have returned the NULL tuple to indicate that no further tuples\n> exist, but we'll have left rs_inited set to true which looks like\n> it'll cause issues.\n\nAh, yes. In the later patches in the series, I handle all end of scan\ncases (regardless of whether or not there was a beginning) in a single\nplace at the end of the function. 
There I release the buffer and reset\nall state -- including setting rs_inited to false. So, that made it okay\nto set rs_inited to true in heapgettup_initial_block().\n\nWhen splitting it up, I made a mistake and missed the case you\nmentioned. Thanks for catching that!\n\nFWIW, I like setting rs_inited in heapgettup_initial_block() better in\nthe final refactor, but I agree with you that in this patch on its own\nit is better in the body of heapgettup() and heapgettup_pagemode().\n \n> I wondered if it might be better to do the scan->rs_inited = true; in\n> heapgettup() and heapgettup_pagemode() instead. The attached v8 patch\n> does it this way. Despite this fixing that bug, I think this might be\n> a slightly better division of duties.\n\nLGTM.\n\n> From cbd37463bdaa96afed4c7c739c8e91b770a9f8a7 Mon Sep 17 00:00:00 2001\n> From: David Rowley <dgrowley@gmail.com>\n> Date: Wed, 1 Feb 2023 19:35:16 +1300\n> Subject: [PATCH v8] Refactor heapam.c adding heapgettup_initial_block function\n> \n> Here we adjust heapgettup() and heapgettup_pagemode() to move the code\n> that fetches the first block out into a helper function. This removes\n> some code duplication.\n> \n> Author: Melanie Plageman\n> Reviewed-by: David Rowley\n> Discussion: https://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com\n> ---\n> src/backend/access/heap/heapam.c | 225 ++++++++++++++-----------------\n> 1 file changed, 103 insertions(+), 122 deletions(-)\n> \n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 0a8bac25f5..40168cc9ca 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -483,6 +483,67 @@ heapgetpage(TableScanDesc sscan, BlockNumber block)\n> \tscan->rs_ntuples = ntup;\n> }\n> \n> +/*\n> + * heapgettup_initial_block - return the first BlockNumber to scan\n> + *\n> + * Returns InvalidBlockNumber when there are no blocks to scan. 
This can\n> + * occur with empty tables and in parallel scans when parallel workers get all\n> + * of the pages before we can get a chance to get our first page.\n> + */\n> +static BlockNumber\n> +heapgettup_initial_block(HeapScanDesc scan, ScanDirection dir)\n> +{\n> +\tAssert(!scan->rs_inited);\n> +\n> +\t/* When there are no pages to scan, return InvalidBlockNumber */\n> +\tif (scan->rs_nblocks == 0 || scan->rs_numblocks == 0)\n> +\t\treturn InvalidBlockNumber;\n> +\n> +\tif (ScanDirectionIsForward(dir))\n> +\t{\n> +\t\t/* serial scan */\n> +\t\tif (scan->rs_base.rs_parallel == NULL)\n> +\t\t\treturn scan->rs_startblock;\n\nI believe this else is superfluous since we returned above.\n\n> +\t\telse\n> +\t\t{\n> +\t\t\t/* parallel scan */\n> +\t\t\ttable_block_parallelscan_startblock_init(scan->rs_base.rs_rd,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t scan->rs_parallelworkerdata,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel);\n> +\n> +\t\t\t/* may return InvalidBlockNumber if there are no more blocks */\n> +\t\t\treturn table_block_parallelscan_nextpage(scan->rs_base.rs_rd,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t scan->rs_parallelworkerdata,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel);\n> +\t\t}\n> +\t}\n...\n> @@ -889,62 +892,40 @@ heapgettup_pagemode(HeapScanDesc scan,\n> -\t\tif (!scan->rs_inited)\n> -\t\t{\n> -\t\t\tlineindex = lines - 1;\n> -\t\t\tscan->rs_inited = true;\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> +\t\t\tpage = BufferGetPage(scan->rs_cbuf);\n> +\t\t\tTestForOldSnapshot(scan->rs_base.rs_snapshot, scan->rs_base.rs_rd, page);\n> +\t\t\tlines = scan->rs_ntuples;\n> \t\t\tlineindex = scan->rs_cindex - 1;\n> \t\t}\n> -\t\t/* block and lineindex now reference the previous visible tid */\n\nI think this is an unintentional diff.\n\n> \n> +\t\t/* block and lineindex now reference the previous visible tid */\n> \t\tlinesleft = lineindex + 1;\n> \t}\n\n- Melanie\n\n\n",
"msg_date": "Wed, 1 Feb 2023 16:12:54 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Thu, 2 Feb 2023 at 10:12, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> FWIW, I like setting rs_inited in heapgettup_initial_block() better in\n> the final refactor, but I agree with you that in this patch on its own\n> it is better in the body of heapgettup() and heapgettup_pagemode().\n\nWe can reconsider that when we get to that patch. It just felt a bit\nugly to add an InvalidBlockNumber check after calling\ntable_block_parallelscan_nextpage()\n\n> I believe this else is superfluous since we returned above.\n\nTBH, that's on purpose. I felt that it just looked better that way as\nthe code all fitted onto my screen. I think if the function was\nlonger and people had to scroll down to read it, it can often be\nbetter to return and reduce the nesting. This allows you to mentally\nnot that a certain case is handled above. However, since all these\nhelper functions seem to fit onto a screen without too much trouble,\nit just seems better (to me) if they all follow the format that I\nmentioned earlier. I might live to regret that as we often see\nget-rid-of-useless-else-clause patches coming up. I'm starting to\nwonder if someone's got some alarm that goes off every time one gets\ncommitted, but we'll see. I'd much rather have consistency between the\nhelper functions than save a few bytes on tab characters. It would be\ndifferent if the indentation were shifting things too far right, but\nthat's not going to happen in a function that all fits onto a screen\nat once.\n\nI've attached a version of the next patch in the series. I admit to\nmaking a couple of adjustments. I couldn't bring myself to remove the\nsnapshot local variable in this commit. We can deal with that when we\ncome to it in whichever patch that needs to be changed in. The only\nother thing I really did was question your use of rs_cindex to store\nthe last OffsetNumber. 
I ended up adding a new field which slots into\nthe padding between a bool and BlockNumber field named rs_coffset for\nthis purpose. I noticed what Andres wrote [1] earlier in this thread\nabout that, so thought we should move away from looking at the last\ntuple to get this number\n\nI've attached the rebased and updated patch.\n\nDavid\n\n[1] https://postgr.es/m/20221101010948.hsf33emgnwzvil4a@awork3.anarazel.de",
"msg_date": "Thu, 2 Feb 2023 19:00:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Thu, Feb 02, 2023 at 07:00:37PM +1300, David Rowley wrote:\n> I've attached a version of the next patch in the series. I admit to\n> making a couple of adjustments. I couldn't bring myself to remove the\n> snapshot local variable in this commit. We can deal with that when we\n> come to it in whichever patch that needs to be changed in.\n\nThat seems fine to keep the diff easy to understand. Also,\nheapgettup_pagemode() didn't have a snapshot local variable either.\n\n> The only other thing I really did was question your use of rs_cindex\n> to store the last OffsetNumber. I ended up adding a new field which\n> slots into the padding between a bool and BlockNumber field named\n> rs_coffset for this purpose. I noticed what Andres wrote [1] earlier\n> in this thread about that, so thought we should move away from looking\n> at the last tuple to get this number\n> \n> [1] https://postgr.es/m/20221101010948.hsf33emgnwzvil4a@awork3.anarazel.de\n\nSo, what Andres had said was: \n\n> Hm. Finding the next offset via rs_ctup doesn't seem quite right. For one,\n> it's not actually that cheap to extract the offset from an ItemPointer because\n> of the the way we pack it into ItemPointerData.\n\nBecause I was doing this in an earlier version:\n\n> > + HeapTuple tuple = &(scan->rs_ctup);\n\nAnd then in the later part of the code got tuple->t_self.\n\nI did this because the code in master does this:\n\n lineoff =\t\t\t/* previous offnum */\n Min(lines,\n OffsetNumberPrev(ItemPointerGetOffsetNumber(&(tuple->t_self))));\n\nSo I figured it was the same. Based on Andres' feedback, I switched to\nsaving the offset number in the scan descriptor and figured I could\nreuse rs_cindex since it is larger than an OffsetNumber.\n\nYour code also switches to saving the OffsetNumber -- just in a separate\nvariable of OffsetNumber type. I am fine with this if it your rationale\nis that it is not a good idea to store a smaller number in a larger\ndatatype. 
However, the benefit I saw in reusing rs_cindex is that we\ncould someday converge the code for heapgettup() and\nheapgettup_pagemode() even more. Even in my final refactor, there is a\nlot of duplicate code between the two.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 2 Feb 2023 12:22:59 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Fri, 3 Feb 2023 at 06:23, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> Your code also switches to saving the OffsetNumber -- just in a separate\n> variable of OffsetNumber type. I am fine with this if it your rationale\n> is that it is not a good idea to store a smaller number in a larger\n> datatype. However, the benefit I saw in reusing rs_cindex is that we\n> could someday converge the code for heapgettup() and\n> heapgettup_pagemode() even more. Even in my final refactor, there is a\n> lot of duplicate code between the two.\n\nI was more concerned about the reuse of an unrelated field. I'm\nstruggling to imagine why using the separate field would cause any\nissues around not being able to reduce the code duplication any more\nthan we otherwise would. Surely in one case you need to get the offset\nby indexing the rs_vistuples[] array and the other is the offset\ndirectly. The only thing I can think of that would allow us not to\nhave a condition there would be if we populated the rs_vistuples[]\narray with 1..n. I doubt should do that and if we did, we could just\nuse the rs_cindex to index that without having to worry that we're\nusing an unrelated field for something.\n\nI've pushed all but the final 2 patches now.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Feb 2023 15:26:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Fri, 3 Feb 2023 at 15:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've pushed all but the final 2 patches now.\n\nI just pushed the final patch in the series. I held back on moving\nthe setting of rs_inited back into the heapgettup_initial_block()\nhelper function as I wondered if we should even keep that field.\n\nIt seems that rs_cblock == InvalidBlockNumber in all cases where\nrs_inited == false, so maybe it's better just to use that as a\ncondition to check if the scan has started or not. I've attached a\npatch which does that.\n\nI ended up adjusting HeapScanDescData more than what is minimally\nrequired to remove rs_inited. I wondered if rs_cindex should be closer\nto rs_cblock in the struct so that we're more likely to be adjusting\nthe same cache line than if that field were closer to the end of the\nstruct. We don't need rs_coffset and rs_cindex at the same time, so I\nmade it a union. I see that the struct is still 712 bytes before and\nafter this change. I've not yet tested to see if there are any\nperformance gains to this change.\n\nDavid",
"msg_date": "Tue, 7 Feb 2023 17:40:13 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Tue, Feb 07, 2023 at 05:40:13PM +1300, David Rowley wrote:\n> On Fri, 3 Feb 2023 at 15:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've pushed all but the final 2 patches now.\n> \n> I just pushed the final patch in the series.\n\nCool!\n\n> I held back on moving the setting of rs_inited back into the\n> heapgettup_initial_block() helper function as I wondered if we should\n> even keep that field.\n> \n> It seems that rs_cblock == InvalidBlockNumber in all cases where\n> rs_inited == false, so maybe it's better just to use that as a\n> condition to check if the scan has started or not. I've attached a\n> patch which does that.\n> \n> I ended up adjusting HeapScanDescData more than what is minimally\n> required to remove rs_inited. I wondered if rs_cindex should be closer\n> to rs_cblock in the struct so that we're more likely to be adjusting\n> the same cache line than if that field were closer to the end of the\n> struct. We don't need rs_coffset and rs_cindex at the same time, so I\n> made it a union. I see that the struct is still 712 bytes before and\n> after this change. 
I've not yet tested to see if there are any\n> performance gains to this change.\n> \n\nI like the idea of using a union.\n\n> diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c\n> index 7eb79cee58..e171d6e38b 100644\n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -321,13 +321,15 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)\n> \t}\n> \n> \tscan->rs_numblocks = InvalidBlockNumber;\n> -\tscan->rs_inited = false;\n> \tscan->rs_ctup.t_data = NULL;\n> \tItemPointerSetInvalid(&scan->rs_ctup.t_self);\n> \tscan->rs_cbuf = InvalidBuffer;\n> \tscan->rs_cblock = InvalidBlockNumber;\n> \n> -\t/* page-at-a-time fields are always invalid when not rs_inited */\n> +\t/*\n> +\t * page-at-a-time fields are always invalid when\n> +\t * rs_cblock == InvalidBlockNumber\n> +\t */\n\nSo, I was wondering what we should do about initializing rs_coffset here\nsince it doesn't fall under \"don't initialize it because it is only used\nfor page-at-a-time mode\". It might not be required for us to initialize\nit in initscan, but we do bother to initialize other \"current scan\nstate\" fields. We could check if pagemode is enabled and initialize\nrs_coffset or rs_cindex depending on that.\n\nThen maybe the comment should call out the specific page-at-a-time\nfields that are automatically invalid? (e.g. rs_ntuples, rs_vistuples)\n\nI presume the point of the comment is to explain why those fields are\nnot being initialized here, which was a question I had when I looked at\ninitscan(), so it seems like we should make sure it explains that.\n\n> @@ -717,9 +720,9 @@ heapgettup_advance_block(HeapScanDesc scan, BlockNumber block, ScanDirection dir\n> * the scankeys.\n> *\n> * Note: when we fall off the end of the scan in either direction, we\n> - * reset rs_inited. 
This means that a further request with the same\n> - * scan direction will restart the scan, which is a bit odd, but a\n> - * request with the opposite scan direction will start a fresh scan\n> + * reset rs_cblock to InvalidBlockNumber. This means that a further request\n> + * with the same scan direction will restart the scan, which is a bit odd, but\n> + * a request with the opposite scan direction will start a fresh scan\n> * in the proper direction. The latter is required behavior for cursors,\n> * while the former case is generally undefined behavior in Postgres\n> * so we don't care too much.\n\nNot the fault of this patch, but I am having trouble parsing this\ncomment. What does restart the scan mean? I get that it is undefined\nbehavior, but it is confusing because it kind of sounds like a rescan\nwhich is not what it is (right?). And what exactly is a fresh scan? It\nis probably correct, I just don't really understand what it is saying.\nFeel free to ignore this aside, as I think your change is correctly\nupdating the comment.\n\n> @@ -2321,13 +2316,12 @@ heapam_scan_sample_next_block(TableScanDesc scan, SampleScanState *scanstate)\n> \t\t\tReleaseBuffer(hscan->rs_cbuf);\n> \t\thscan->rs_cbuf = InvalidBuffer;\n> \t\thscan->rs_cblock = InvalidBlockNumber;\n> -\t\thscan->rs_inited = false;\n> \n> \t\treturn false;\n> \t}\n> \n> \theapgetpage(scan, blockno);\n> -\thscan->rs_inited = true;\n> +\tAssert(hscan->rs_cblock != InvalidBlockNumber);\n\nQuite nice to have this assert.\n\n> diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> index 8d28bc93ef..c6efd59eb5 100644\n> --- a/src/include/access/heapam.h\n> +++ b/src/include/access/heapam.h\n> @@ -56,9 +56,18 @@ typedef struct HeapScanDescData\n> \t/* rs_numblocks is usually InvalidBlockNumber, meaning \"scan whole rel\" */\n> \n> \t/* scan current state */\n> -\tbool\t\trs_inited;\t\t/* false = scan not init'd yet */\n> -\tOffsetNumber rs_coffset;\t/* current offset # in 
non-page-at-a-time mode */\n> -\tBlockNumber rs_cblock;\t\t/* current block # in scan, if any */\n> +\tunion\n> +\t{\n> +\t\t/* current offset in non-page-at-a-time mode */\n> +\t\tOffsetNumber rs_coffset;\n> +\n> +\t\t/* current tuple's index in vistuples for page-at-a-time mode */\n> +\t\tint\t\t\trs_cindex;\n> +\t};\n\nWith the union up here, the comment about page-at-a-time mode members\nnear the bottom of the struct is now a bit misleading.\n\nThe rest of my thoughts about that are with that comment.\n\n> +\n> +\tBlockNumber rs_cblock;\t\t/* current block # in scan, or\n> +\t\t\t\t\t\t\t\t * InvalidBlockNumber when the scan is not yet\n> +\t\t\t\t\t\t\t\t * initialized */\n\nThe formatting of this comment is a bit difficult to read. Perhaps it\ncould go above the member?\n\nAlso, a few random other complaints about the comments in\nHeapScanDescData that are also not the fault of this patch:\n\n- why is this comment there twice?\n\t/* rs_numblocks is usually InvalidBlockNumber, meaning \"scan whole rel\" */\n\n- above the union it said (prior to this patch also)\n\t/* scan current state */\n\twhich is in contrast to \n\t/* state set up at initscan time */\n\tfrom the top, but, arguably, rs_strategy is set up at initscan time\n\n- who or what is NB?\n\t/* NB: if rs_cbuf is not InvalidBuffer, we hold a pin on that buffer */\n\n> \tBuffer\t\trs_cbuf;\t\t/* current buffer in scan, if any */\n> \t/* NB: if rs_cbuf is not InvalidBuffer, we hold a pin on that buffer */\n> \n> @@ -74,7 +83,6 @@ typedef struct HeapScanDescData\n> \tParallelBlockTableScanWorkerData *rs_parallelworkerdata;\n> \n> \t/* these fields only used in page-at-a-time mode and for bitmap scans */\n> -\tint\t\t\trs_cindex;\t\t/* current tuple's index in vistuples */\n> \tint\t\t\trs_ntuples;\t\t/* number of visible tuples on page */\n> \tOffsetNumber rs_vistuples[MaxHeapTuplesPerPage];\t/* their offsets */\n\nI would personally be okay with either a block comment somewhere nearby\ndescribing which 
members are used in page-at-a-time mode or individual\ncomments for each field that is used only in page-at-a-time mode (or\nsomething else, I just think the situation with the patch applied\ncurrently is confusing).\n\n- Melanie\n\n\n",
"msg_date": "Tue, 7 Feb 2023 15:41:27 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Wed, 8 Feb 2023 at 09:41, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> On Tue, Feb 07, 2023 at 05:40:13PM +1300, David Rowley wrote:\n> > I ended up adjusting HeapScanDescData more than what is minimally\n> > required to remove rs_inited. I wondered if rs_cindex should be closer\n> > to rs_cblock in the struct so that we're more likely to be adjusting\n> > the same cache line than if that field were closer to the end of the\n> > struct. We don't need rs_coffset and rs_cindex at the same time, so I\n> > made it a union. I see that the struct is still 712 bytes before and\n> > after this change. I've not yet tested to see if there are any\n> > performance gains to this change.\n> >\n>\n> I like the idea of using a union.\n\nUsing the tests mentioned in [1], I tested out\nremove_HeapScanDescData_rs_inited_field.patch. It's not looking very\npromising at all.\n\nseqscan.sql test:\n\nmaster (e2c78e7ab)\ntps = 27.769076 (without initial connection time)\ntps = 28.155233 (without initial connection time)\ntps = 26.990389 (without initial connection time)\n\nmaster + remove_HeapScanDescData_rs_inited_field.patch\ntps = 23.990490 (without initial connection time)\ntps = 23.450662 (without initial connection time)\ntps = 23.600194 (without initial connection time)\n\nmaster + remove_HeapScanDescData_rs_inited_field.patch without union\nHeapScanDescData change (just remove rs_inited field)\ntps = 24.419007 (without initial connection time)\ntps = 24.221389 (without initial connection time)\ntps = 24.187756 (without initial connection time)\n\n\ncountall.sql test:\n\nmaster (e2c78e7ab)\n\ntps = 33.999408 (without initial connection time)\ntps = 33.664292 (without initial connection time)\ntps = 33.869115 (without initial connection time)\n\nmaster + remove_HeapScanDescData_rs_inited_field.patch\ntps = 31.194316 (without initial connection time)\ntps = 30.804987 (without initial connection time)\ntps = 30.770236 (without initial connection 
time)\n\nmaster + remove_HeapScanDescData_rs_inited_field.patch without union\nHeapScanDescData change (just remove rs_inited field)\ntps = 32.626187 (without initial connection time)\ntps = 32.876362 (without initial connection time)\ntps = 32.481729 (without initial connection time)\n\nI don't really have any explanation for why this slows performance so\nmuch. My thoughts are that if the performance of scans is this\nsensitive to the order of the fields in the struct then it's an\nindependent project to learn out why and what we can realistically\nchange to get the best performance here.\n\n> So, I was wondering what we should do about initializing rs_coffset here\n> since it doesn't fall under \"don't initialize it because it is only used\n> for page-at-a-time mode\". It might not be required for us to initialize\n> it in initscan, but we do bother to initialize other \"current scan\n> state\" fields. We could check if pagemode is enabled and initialize\n> rs_coffset or rs_cindex depending on that.\n\nMaybe master should be initialising this field already. I didn't quite\nsee it as important as it's never used before rs_inited is set to\ntrue. Maybe setting it to InvalidOffsetNumber is a good idea just in\ncase something tries to use it before it gets set.\n\n> > * Note: when we fall off the end of the scan in either direction, we\n> > - * reset rs_inited. This means that a further request with the same\n> > - * scan direction will restart the scan, which is a bit odd, but a\n> > - * request with the opposite scan direction will start a fresh scan\n> > + * reset rs_cblock to InvalidBlockNumber. This means that a further request\n> > + * with the same scan direction will restart the scan, which is a bit odd, but\n> > + * a request with the opposite scan direction will start a fresh scan\n> > * in the proper direction. 
The latter is required behavior for cursors,\n> > * while the former case is generally undefined behavior in Postgres\n> > * so we don't care too much.\n>\n> Not the fault of this patch, but I am having trouble parsing this\n> comment. What does restart the scan mean? I get that it is undefined\n> behavior, but it is confusing because it kind of sounds like a rescan\n> which is not what it is (right?). And what exactly is a fresh scan? It\n> is probably correct, I just don't really understand what it is saying.\n> Feel free to ignore this aside, as I think your change is correctly\n> updating the comment.\n\nI struggled with this too. It just looks incorrect. As far as I see\nit, once the scan ends we do the same thing regardless of what the\nscan direction is. Maybe it's worth looking back at when that comment\nwas added and seeing if it was true when it was written and then see\nwhat changed. I think it's worth improving that independently.\n\nI think I'd like to focus on the cleanup stuff before looking into\ngetting rid of rs_inited. I'm not sure I'm going to get time to do the\nrequired performance tests to look into why removing rs_inited slows\nthings down so much.\n\n> Also, a few random other complaints about the comments in\n> HeapScanDescData that are also not the fault of this patch:\n>\n> - why is this comment there twice?\n> /* rs_numblocks is usually InvalidBlockNumber, meaning \"scan whole rel\" */\n\nSeems to date back to 2019. I'll push something shortly to remove the\nadditional one.\n\n> - above the union it said (prior to this patch also)\n> /* scan current state */\n> which is in contrast to\n> /* state set up at initscan time */\n> from the top, but, arguably, rs_strategy is set up at initscan time\n\nI'm not sure what the best thing to do about that is. Given the\nperformance numbers I showed above after removing the rs_inited field,\nI'd rather not move that field in the struct up to the other fields\nthat are set in initscan(). 
Perhaps you can suggest a patch which\nimproves the comments?\n\n> - who or what is NB?\n> /* NB: if rs_cbuf is not InvalidBuffer, we hold a pin on that buffer */\n\nLatin for \"note\". It's used quite commonly in our code.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpnA9SGp3OeXr4cYqX_w=NYN2YMzf2zfrABPNDsUqoNqw@mail.gmail.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 15:09:24 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
},
{
"msg_contents": "On Wed, 8 Feb 2023 at 15:09, David Rowley <dgrowleyml@gmail.com> wrote:\n> Using the tests mentioned in [1], I tested out\n> remove_HeapScanDescData_rs_inited_field.patch. It's not looking very\n> promising at all.\n\nIn light of the performance regression from removing the rs_inited\nfield, let's just forget doing that for now. It does not really seem\nthat important compared to the other work that's already been done.\n\nIf one of us gets time during the v17 cycle, then maybe we can revisit it then.\n\nI'll mark the patch as committed in the CF app.\n\nDavid\n\n\n",
"msg_date": "Sun, 19 Mar 2023 22:38:12 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: heapgettup refactoring"
}
]
[
{
"msg_contents": "Hi,\r\n\r\nWe currently do not provide any SQL functions for generating SCRAM \r\nsecrets, whereas we have this support for other passwords types \r\n(plaintext and md5 via `md5(password || username)`). If a user wants to \r\nbuild a SCRAM secret via SQL, they have to implement our SCRAM hashing \r\nfuncs on their own.\r\n\r\nHaving a set of SCRAM secret building functions would help in a few areas:\r\n\r\n1. Ensuring we have a SQL-equivalent of CREATE/ALTER ROLE ... PASSWORD \r\nwhere we can compute a pre-hashed password.\r\n\r\n2. Keeping a history file of user-stored passwords or checking against a \r\ncommon-password dictionary.\r\n\r\n3. Allowing users to build SQL-functions that can precompute SCRAM \r\nsecrets on a local server before sending it to a remote server.\r\n\r\nAttached is a (draft) patch that adds a function called \r\n\"scram_build_secret_sha256\" that can take 3 arguments:\r\n\r\n* password (text) - a plaintext password\r\n* salt (text) - a base64 encoded salt\r\n* iterations (int) - the number of iterations to hash the plaintext \r\npassword.\r\n\r\nThere are three variations of the function:\r\n\r\n1. password only -- this defers to the PG defaults for SCRAM\r\n2. password + salt -- this is useful for the password history / \r\ndictionary case to allow for a predictable way to check a password.\r\n3. password + salt + iterations -- this allows the user to modify the \r\nnumber of iterations to hash a password.\r\n\r\nThe design of the patch primarily delegates to the existing SCRAM secret \r\nbuilding code and provides a few wrapper functions around it that \r\nevaluate user input.\r\n\r\nThere are a few open items on this patch, i.e.:\r\n\r\n1. Location of the functions. I put them in\r\nsrc/backend/utils/adt/cryptohashfuncs.c as I wasn't sure where it would \r\nmake sense to have them (and they could easily go into their own file).\r\n\r\n2. 
I noticed a common set of base64 function calls that could possibly \r\nbe refactored into one; I left a TODO comment around that.\r\n\r\n3. More tests\r\n\r\n4. Docs -- if it seems like we're OK with including these functions, \r\nI'll write these up.\r\n\r\nPlease let me know if you have any questions. I'll add a CF entry for this.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\nP.S. I used this as a forcing function to get the meson build system set \r\nup and thus far I quite like it!",
"msg_date": "Mon, 31 Oct 2022 16:27:08 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "User functions for building SCRAM secrets"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n\n> Attached is a (draft) patch that adds a function called\n> \"scram_build_secret_sha256\" that can take 3 arguments:\n\nThis seems like a reasonable piece of functionality, I just have one\ncomment on the implementation.\n\n> * password (text) - a plaintext password\n> * salt (text) - a base64 encoded salt\n[…]\n> +\t/*\n> +\t * determine if this a valid base64 encoded string\n> +\t * TODO: look into refactoring the SCRAM decode code in libpq/auth-scram.c\n> +\t */\n> +\tsalt_str_dec_len = pg_b64_dec_len(strlen(salt_str_enc));\n> +\tsalt_str_dec = palloc(salt_str_dec_len);\n> +\tsalt_str_dec_len = pg_b64_decode(salt_str_enc, strlen(salt_str_enc),\n> +\t\t\t\t\t\t\t\tsalt_str_dec, salt_str_dec_len);\n> +\tif (salt_str_dec_len < 0)\n> +\t{\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_DATA_EXCEPTION),\n> +\t\t\t\t errmsg(\"invalid base64 encoded string\"),\n> +\t\t\t\t errhint(\"Use the \\\"encode\\\" function to convert to valid base64 string.\")));\n> +\t}\n\nInstead of going through all these machinations to base64-decode the\nsalt and tell the user off if they encoded it wrong, why not accept the\nbinary salt directly as a bytea?\n\n- ilmari\n\n\n",
"msg_date": "Mon, 31 Oct 2022 22:05:21 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 10/31/22 6:05 PM, Dagfinn Ilmari Mannsåker wrote:\r\n\r\n>> * password (text) - a plaintext password\r\n>> * salt (text) - a base64 encoded salt\r\n> […]\r\n>> +\t/*\r\n>> +\t * determine if this a valid base64 encoded string\r\n>> +\t * TODO: look into refactoring the SCRAM decode code in libpq/auth-scram.c\r\n>> +\t */\r\n>> +\tsalt_str_dec_len = pg_b64_dec_len(strlen(salt_str_enc));\r\n>> +\tsalt_str_dec = palloc(salt_str_dec_len);\r\n>> +\tsalt_str_dec_len = pg_b64_decode(salt_str_enc, strlen(salt_str_enc),\r\n>> +\t\t\t\t\t\t\t\tsalt_str_dec, salt_str_dec_len);\r\n>> +\tif (salt_str_dec_len < 0)\r\n>> +\t{\r\n>> +\t\tereport(ERROR,\r\n>> +\t\t\t\t(errcode(ERRCODE_DATA_EXCEPTION),\r\n>> +\t\t\t\t errmsg(\"invalid base64 encoded string\"),\r\n>> +\t\t\t\t errhint(\"Use the \\\"encode\\\" function to convert to valid base64 string.\")));\r\n>> +\t}\r\n> \r\n> Instead of going through all these machinations to base64-decode the\r\n> salt and tell the user off if they encoded it wrong, why not accept the\r\n> binary salt directly as a bytea?\r\n\r\nIf we did that, I think we'd have to offer both. Most users are going to \r\nbe manipulating the salt as a base64 string, both because of 1/ how we \r\nstore it within the SCRAM secret and 2/ it's convenient in many \r\nlanguages to work with base64 encoded binary data.\r\n\r\nSo in that case, we'd still have to go through the \"base64 machinations\".\r\n\r\nHowever, I'd be OK with allowing for users to specify a \"bytea\" salt in \r\naddition to a base64 one if that seems reasonable. I would be -1 for \r\nswapping the base64 salt for just a \"bytea\" one as I think that would \r\npresent usability challenges.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 31 Oct 2022 18:52:20 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 04:27:08PM -0400, Jonathan S. Katz wrote:\n> 1. password only -- this defers to the PG defaults for SCRAM\n> 2. password + salt -- this is useful for the password history / dictionary\n> case to allow for a predictable way to check a password.\n\nWell, one could pass a salt based on something generated by random()\nto emulate what we currently do in the default case, as well. The\nsalt length is an extra possibility, letting it be randomly generated\nby pg_strong_random().\n\n> 1. Location of the functions. I put them in\n> src/backend/utils/adt/cryptohashfuncs.c as I wasn't sure where it would make\n> sense to have them (and they could easily go into their own file).\n\nAs of adt/authfuncs.c? cryptohashfuncs.c does not strike me as a good\nfit.\n\n> Please let me know if you have any questions. I'll add a CF entry for this.\n\n+{ oid => '8555', descr => 'Build a SCRAM secret',\n+ proname => 'scram_build_secret_sha256', proleakproof => 't', prorettype => 'text',\n+ proargtypes => 'text', prosrc => 'scram_build_secret_sha256_from_password' },\n+{ oid => '8556', descr => 'Build a SCRAM secret',\n+ proname => 'scram_build_secret_sha256', proleakproof => 't',\n+ provolatile => 'i', prorettype => 'text',\n+ proargtypes => 'text text', prosrc => 'scram_build_secret_sha256_from_password_and_salt' },\n+{ oid => '8557', descr => 'Build a SCRAM secret',\n+ proname => 'scram_build_secret_sha256', proleakproof => 't',\n+ provolatile => 'i', prorettype => 'text',\n+ proargtypes => 'text text int4', prosrc => 'scram_build_secret_sha256_from_password_and_salt_and_iterations' },\n\nKeeping this approach as-is, I don't think that you should consume 3\nOIDs, but 1 (with scram_build_secret_sha256_from_password_and_..\nas prosrc) that has two defaults for the second argument (salt string,\ndefault as NULL) and third argument (salt, default at 0), with the\ndefaults set up in system_functions.sql via a redefinition.\n\nNote that you cannot pass 
down an expression for the password of\nCREATE/ALTER USER, meaning that this would need a \\gset at least if\ndone by a psql client for example, and passing down a password string\nis not an encouraged practice, either. Another approach is to also\nprovide a role OID in input and store the newly-computed password in\npg_authid (as of [1]), so as one can store it easily.\n\nDid you look at extending \\password? Being able to extend\nPQencryptPasswordConn() with custom parameters has been discussed in\nthe past, but this has gone nowhere. That's rather unrelated to what\nyou are looking for, just mentioning as we are playing with options to\nhave control the iteration number and the salt.\n\n[1]: https://github.com/michaelpq/pg_plugins/blob/main/scram_utils/scram_utils.c\n--\nMichael",
"msg_date": "Tue, 1 Nov 2022 09:56:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Mon, Oct 31, 2022 at 1:27 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Having a set of SCRAM secret building functions would help in a few areas:\n\nI have mixed-to-negative feelings about this. Orthogonality with other\nmethods seems reasonable, except we don't really recommend that people\nuse those other methods today. SCRAM is supposed to be one of the\nsolutions where the server does not know your password at any point\nand cannot impersonate you to others.\n\nIf we don't provide an easy client-side equivalent for the new\nfunctionality, via \\password or some such, then the path of least\nresistance for some of these intermediate use cases (i.e. higher\niteration count) will be \"just get used to sending your password in\nplaintext,\" and that doesn't really sound all that great. Similar to\npgcrypto's current state.\n\n> 2. Keeping a history file of user-stored passwords\n\nCould you expand on this? How does being able to generate SCRAM\nsecrets help you keep a password history?\n\n> or checking against a common-password dictionary.\n\nPeople really want to do this using SQL? Maybe my idea of the use case\nis way off, but I'm skeptical that this scales (safely and/or\nperformantly) to a production system, *especially* if you have your\niteration counts high enough.\n\n> 3. Allowing users to build SQL-functions that can precompute SCRAM\n> secrets on a local server before sending it to a remote server.\n\nI guess I have fewer problems with this use case in theory, but I'm\nwondering if better client-side support might also solve this one as\nwell, without the additional complication. Is there a reason it would\nnot?\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 1 Nov 2022 16:02:21 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 4:02 PM Jacob Champion <jchampion@timescale.com> wrote:\n> I guess I have fewer problems with this use case in theory, but I'm\n> wondering if better client-side support might also solve this one as\n> well, without the additional complication. Is there a reason it would\n> not?\n\nTo expand on this question, after giving it some more thought:\n\nIt seems to me that the use case here is extremely similar to the one\nbeing tackled by Peter E's client-side encryption [1]. People want to\nwrite SQL to perform a cryptographic operation using a secret, and\nthen send the resulting ciphertext (or in this case, a one-way hash)\nto the server, but ideally the server should not actually have the\nsecret.\n\nI don't think it's helpful for me to try to block progress on this\npatchset behind the other one. But is there a way for me to help this\nproposal skate in the same general direction? Could Peter's encryption\nframework expand to fit this case in the future?\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/89157929-c2b6-817b-6025-8e4b2d89d88f%40enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Nov 2022 13:39:47 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 04.11.22 21:39, Jacob Champion wrote:\n> It seems to me that the use case here is extremely similar to the one\n> being tackled by Peter E's client-side encryption [1]. People want to\n> write SQL to perform a cryptographic operation using a secret, and\n> then send the resulting ciphertext (or in this case, a one-way hash)\n> to the server, but ideally the server should not actually have the\n> secret.\n\nIt might be possible, but it's a bit of a reach. For instance, there \nare no keys and no decryption associated with this kind of operation.\n\n> I don't think it's helpful for me to try to block progress on this\n> patchset behind the other one. But is there a way for me to help this\n> proposal skate in the same general direction? Could Peter's encryption\n> framework expand to fit this case in the future?\n\nWe already have support in libpq for doing this (PQencryptPasswordConn()).\n\n\n",
"msg_date": "Tue, 8 Nov 2022 21:26:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 11/8/22 12:26, Peter Eisentraut wrote:\n> On 04.11.22 21:39, Jacob Champion wrote:\n>> I don't think it's helpful for me to try to block progress on this\n>> patchset behind the other one. But is there a way for me to help this\n>> proposal skate in the same general direction? Could Peter's encryption\n>> framework expand to fit this case in the future?\n> \n> We already have support in libpq for doing this (PQencryptPasswordConn()).\n\nSure, but you can't access that in SQL, right? The hand-wavy part is to\ncombine that existing function with your transparent encryption\nproposal, as a special-case encryptor whose output could be bound to the\nquery.\n\nBut I guess that wouldn't really help with ALTER ROLE ... PASSWORD,\nbecause you can't parameterize it. Hm...\n\n--Jacob\n\n\n",
"msg_date": "Tue, 8 Nov 2022 16:57:09 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Tue, Nov 08, 2022 at 04:57:09PM -0800, Jacob Champion wrote:\n> But I guess that wouldn't really help with ALTER ROLE ... PASSWORD,\n> because you can't parameterize it. Hm...\n\nYeah, and I'd like to think that this is never something we should\nallow, either, as that could be easily a footgun for users (?).\n--\nMichael",
"msg_date": "Wed, 9 Nov 2022 14:28:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 9:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Nov 08, 2022 at 04:57:09PM -0800, Jacob Champion wrote:\n> > But I guess that wouldn't really help with ALTER ROLE ... PASSWORD,\n> > because you can't parameterize it. Hm...\n>\n> Yeah, and I'd like to think that this is never something we should\n> allow, either, as that could be easily a footgun for users (?).\n\nWhat would make it unsafe? I don't know a lot about the tradeoffs for\nparameterizing queries.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 10 Nov 2022 13:07:42 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 10/31/22 8:56 PM, Michael Paquier wrote:\r\n> On Mon, Oct 31, 2022 at 04:27:08PM -0400, Jonathan S. Katz wrote:\r\n>> 1. password only -- this defers to the PG defaults for SCRAM\r\n>> 2. password + salt -- this is useful for the password history / dictionary\r\n>> case to allow for a predictable way to check a password.\r\n> \r\n> Well, one could pass a salt based on something generated by random()\r\n> to emulate what we currently do in the default case, as well. The\r\n> salt length is an extra possibility, letting it be randomly generated\r\n> by pg_strong_random().\r\n\r\nSure, this is a good point. From a SQL level we can get that from \r\npgcrypto \"gen_random_bytes\".\r\n\r\nPer this and ilmari's feedback, I updated the 2nd argument to be a \r\nbytea. See the corresponding tests that then show using decode(..., \r\n'base64') to handle this.\r\n\r\nWhen I write the docs, I'll include that in the examples.\r\n\r\n>> 1. Location of the functions. I put them in\r\n>> src/backend/utils/adt/cryptohashfuncs.c as I wasn't sure where it would make\r\n>> sense to have them (and they could easily go into their own file).\r\n> \r\n> As of adt/authfuncs.c? cryptohashfuncs.c does not strike me as a good\r\n> fit.\r\n\r\nI went with your suggested name.\r\n\r\n>> Please let me know if you have any questions. 
I'll add a CF entry for this.\r\n> \r\n> +{ oid => '8555', descr => 'Build a SCRAM secret',\r\n> + proname => 'scram_build_secret_sha256', proleakproof => 't', prorettype => 'text',\r\n> + proargtypes => 'text', prosrc => 'scram_build_secret_sha256_from_password' },\r\n> +{ oid => '8556', descr => 'Build a SCRAM secret',\r\n> + proname => 'scram_build_secret_sha256', proleakproof => 't',\r\n> + provolatile => 'i', prorettype => 'text',\r\n> + proargtypes => 'text text', prosrc => 'scram_build_secret_sha256_from_password_and_salt' },\r\n> +{ oid => '8557', descr => 'Build a SCRAM secret',\r\n> + proname => 'scram_build_secret_sha256', proleakproof => 't',\r\n> + provolatile => 'i', prorettype => 'text',\r\n> + proargtypes => 'text text int4', prosrc => 'scram_build_secret_sha256_from_password_and_salt_and_iterations' },\r\n> \r\n> Keeping this approach as-is, I don't think that you should consume 3\r\n> OIDs, but 1 (with scram_build_secret_sha256_from_password_and_..\r\n> as prosrc) that has two defaults for the second argument (salt string,\r\n> default as NULL) and third argument (salt, default at 0), with the\r\n> defaults set up in system_functions.sql via a redefinition.\r\n\r\nThanks for the suggestion. I went with this as well.\r\n\r\n> Note that you cannot pass down an expression for the password of\r\n> CREATE/ALTER USER, meaning that this would need a \\gset at least if\r\n> done by a psql client for example, and passing down a password string\r\n> is not an encouraged practice, either. Another approach is to also\r\n> provide a role OID in input and store the newly-computed password in\r\n> pg_authid (as of [1]), so as one can store it easily.\r\n\r\n...unless you dynamically generate the CREATE/ALTER ROLE command ;) (and \r\nyes, lots of discussion on that).\r\n\r\n> Did you look at extending \\password? Being able to extend\r\n> PQencryptPasswordConn() with custom parameters has been discussed in\r\n> the past, but this has gone nowhere. 
That's rather unrelated to what\r\n> you are looking for, just mentioning as we are playing with options to\r\n> have control the iteration number and the salt.\r\n\r\nNot yet, but happy to do that as a follow-up patch.\r\n\r\nPlease see version 2. If folks are generally happy with this, I'll \r\naddress any additional feedback and write up docs.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 10 Nov 2022 23:14:34 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 11:14:34PM -0500, Jonathan S. Katz wrote:\n> On 10/31/22 8:56 PM, Michael Paquier wrote:\n>> Well, one could pass a salt based on something generated by random()\n>> to emulate what we currently do in the default case, as well. The\n>> salt length is an extra possibility, letting it be randomly generated\n>> by pg_strong_random().\n> \n> Sure, this is a good point. From a SQL level we can get that from pgcrypto\n> \"gen_random_bytes\".\n\nCould it be something we could just push into core? FWIW, I've used\nthat quite a bit in the last to cheaply build long random strings of\ndata for other things. Without pgcrypto, random() with\ngenerate_series() has always been kind of.. fun.\n\n+SELECT scram_build_secret_sha256(NULL);\n+ERROR: password must not be null\n+SELECT scram_build_secret_sha256(NULL, NULL);\n+ERROR: password must not be null\n+SELECT scram_build_secret_sha256(NULL, NULL, NULL);\n+ERROR: password must not be null\n\nThis is just testing three times the same thing as per the defaults.\nI would cut the second and third cases.\n\ngit diff --check reports some whitespaces.\n\nscram_build_secret_sha256_internal() is missing SASLprep on the\npassword string. Perhaps the best thing to do here is just to extend\npg_be_scram_build_secret() with more arguments so as callers can\noptionally pass down a custom salt with its length, leaving the\nresponsibility to pg_be_scram_build_secret() to create a random salt\nif nothing has been given?\n--\nMichael",
"msg_date": "Thu, 17 Nov 2022 12:09:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 11/16/22 10:09 PM, Michael Paquier wrote:\r\n> On Thu, Nov 10, 2022 at 11:14:34PM -0500, Jonathan S. Katz wrote:\r\n>> On 10/31/22 8:56 PM, Michael Paquier wrote:\r\n>>> Well, one could pass a salt based on something generated by random()\r\n>>> to emulate what we currently do in the default case, as well. The\r\n>>> salt length is an extra possibility, letting it be randomly generated\r\n>>> by pg_strong_random().\r\n>>\r\n>> Sure, this is a good point. From a SQL level we can get that from pgcrypto\r\n>> \"gen_random_bytes\".\r\n> \r\n> Could it be something we could just push into core? FWIW, I've used\r\n> that quite a bit in the last to cheaply build long random strings of\r\n> data for other things. Without pgcrypto, random() with\r\n> generate_series() has always been kind of.. fun.\r\n\r\nI would be a +1 for moving that into core, given we did something \r\nsimilar with gen_random_uuid[1]. Separate patch, of course :)\r\n\r\n> +SELECT scram_build_secret_sha256(NULL);\r\n> +ERROR: password must not be null\r\n> +SELECT scram_build_secret_sha256(NULL, NULL);\r\n> +ERROR: password must not be null\r\n> +SELECT scram_build_secret_sha256(NULL, NULL, NULL);\r\n> +ERROR: password must not be null\r\n> \r\n> This is just testing three times the same thing as per the defaults.\r\n> I would cut the second and third cases.\r\n\r\nAFAICT it's not returning the defaults. Quick other example:\r\n\r\nCREATE FUNCTION ab (a int DEFAULT 0) RETURNS int AS $$ SELECT a; $$ \r\nLANGUAGE SQL;\r\n\r\nSELECT ab();\r\n ab\r\n----\r\n 0\r\n(1 row)\r\n\r\nSELECT ab(NULL::int);\r\n ab\r\n----\r\n\r\n(1 row)\r\n\r\nGiven scram_build_secret_sha256 is not a strict function, I'd prefer to \r\ntest all of the NULL cases in case anything in the underlying code \r\nchanges in the future. It's a cheap cost to be a bit more careful.\r\n\r\n> git diff --check reports some whitespaces.\r\n\r\nAck. Will fix on the next pass. 
(I've been transitioning editors, which \r\ncould have resulted in that),\r\n\r\n> scram_build_secret_sha256_internal() is missing SASLprep on the\r\n> password string. Perhaps the best thing to do here is just to extend\r\n> pg_be_scram_build_secret() with more arguments so as callers can\r\n> optionally pass down a custom salt with its length, leaving the\r\n> responsibility to pg_be_scram_build_secret() to create a random salt\r\n> if nothing has been given?\r\n\r\nAh, good catch!\r\n\r\nI think if we go with passing down the salt, we'd also have to allow for \r\nthe passing down of the iterations, too, and we're close to rebuilding \r\n\"scram_build_secret\". I'll stare a bit at this on the next pass and \r\neither 1/ just SASLprep the string in the new \r\n\"scram_build_secret_sha256_internal\" func, or 2/ change the definition \r\nof \"pg_be_scram_build_secret\" to accommodate more overrides.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/current/functions-uuid.html",
"msg_date": "Sat, 26 Nov 2022 14:53:44 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 11/26/22 2:53 PM, Jonathan S. Katz wrote:\r\n> On 11/16/22 10:09 PM, Michael Paquier wrote:\r\n\r\n>> git diff --check reports some whitespaces.\r\n> \r\n> Ack. Will fix on the next pass. (I've been transitioning editors, which \r\n> could have resulted in that),\r\n\r\nFixed (and have run that check subsequently).\r\n\r\n>> scram_build_secret_sha256_internal() is missing SASLprep on the\r\n>> password string. Perhaps the best thing to do here is just to extend\r\n>> pg_be_scram_build_secret() with more arguments so as callers can\r\n>> optionally pass down a custom salt with its length, leaving the\r\n>> responsibility to pg_be_scram_build_secret() to create a random salt\r\n>> if nothing has been given?\r\n> \r\n> Ah, good catch!\r\n> \r\n> I think if we go with passing down the salt, we'd also have to allow for \r\n> the passing down of the iterations, too, and we're close to rebuilding \r\n> \"scram_build_secret\". I'll stare a bit at this on the next pass and \r\n> either 1/ just SASLprep the string in the new \r\n> \"scram_build_secret_sha256_internal\" func, or 2/ change the definition \r\n> of \"pg_be_scram_build_secret\" to accommodate more overrides.\r\n\r\nIn the end I went with your suggested approach as it limited the amount \r\nof code duplication. I did keep in all the permutations of the tests as \r\nit did help me catch an error in my code that let to a panic.\r\n\r\nAs this seems to be closer to completion, I did include docs in this \r\npatch. I added this function as part of the \"string functions\" section \r\nof the docs as \"md5\" was already there. If we continue to add more \r\nauthentication helper functions, perhaps we should consider breaking \r\nthose out into their own documentation section.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sun, 27 Nov 2022 00:21:58 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "> On 27 Nov 2022, at 06:21, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 11/26/22 2:53 PM, Jonathan S. Katz wrote:\n>> On 11/16/22 10:09 PM, Michael Paquier wrote:\n> \n>>> git diff --check reports some whitespaces.\n>> Ack. Will fix on the next pass. (I've been transitioning editors, which could have resulted in that),\n> \n> Fixed (and have run that check subsequently).\n\nThe spaces-before-tabs that git is complaining about are gone but there are\nstill whitespace issues like scram_build_secret_sha256() which has a mix of\ntwo-space and tab indentation. I recommend taking it for a spin with pgindent.\n\nSorry for not noticing the thread earlier, below are some review comments and \n\n+ SCRAM secret equilvaent to what is stored in\ns/equilvaent/equivalent/\n\n+ <literal>SELECT scram_build_secret_sha256('secret password', '\\xabba5432');</literal>\n+ <returnvalue></returnvalue>\n+<programlisting>\n+ SCRAM-SHA-256$4096:q7pUMg==$05Nb9QHwHkMA0CRcYaEfwtgZ+3kStIefz8fLMjTEtio=:P126h1ycyP938E69yxktEfhoAILbiwL/UMsMk3Efb6o=\n+</programlisting>\nShouldn't the function output be inside <returnvalue></returnvalue>? IIRC the\nuse if <programlisting> with an empty <returnvalue> is for multiline output,\nbut I'm not 100% sure there.\n\n+\tif (iterations <= 0)\n+\t\titerations = SCRAM_DEFAULT_ITERATIONS;\nAccording to the RFC, the iteration-count \"SHOULD be at least 4096\", so we can\nreduce it, but do we gain anything by allowing users to set it lower? 
If so,\nscram_build_secret() is already converting (iterations <= 0) to the default so\nthere is no need to duplicate that logic.\n\nPersonally I'd prefer if we made 4096 the minimum and only allowed higher\nregardless of the fate of this patch, but that's for another thread.\n\n+ Assert(secret != NULL);\nI don't think there are any paths where this is possible to trigger, did you\nsee any?\n\n\nOn the whole I tend to agree with Jacob upthread, while this does provide\nconsistency it doesn't seem to move the needle for best practices. Allowing\nscram_build_secret_sha256('password', 'a', 1); with the password potentially\ngoing in cleartext over the wire and into the logs doesn't seem like a great\ntradeoff for the (IMHO) niche use cases it would satisfy.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 29 Nov 2022 21:32:34 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 09:32:34PM +0100, Daniel Gustafsson wrote:\n> On the whole I tend to agree with Jacob upthread, while this does provide\n> consistency it doesn't seem to move the needle for best practices. Allowing\n> scram_build_secret_sha256('password', 'a', 1); with the password potentially\n> going in cleartext over the wire and into the logs doesn't seem like a great\n> tradeoff for the (IMHO) niche usecases it would satisfy.\n\nShould we try to make \\password and libpq more flexible instead? Two\nthings got discussed in this area since v10:\n- The length of the random salt.\n- The iteration number.\n\nOr we could bump up the defaults, and come back to that in a few\nyears, again.. ;p\n--\nMichael",
"msg_date": "Wed, 30 Nov 2022 10:12:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 11/29/22 8:12 PM, Michael Paquier wrote:\r\n> On Tue, Nov 29, 2022 at 09:32:34PM +0100, Daniel Gustafsson wrote:\r\n>> On the whole I tend to agree with Jacob upthread, while this does provide\r\n>> consistency it doesn't seem to move the needle for best practices. Allowing\r\n>> scram_build_secret_sha256('password', 'a', 1); with the password potentially\r\n>> going in cleartext over the wire and into the logs doesn't seem like a great\r\n>> tradeoff for the (IMHO) niche usecases it would satisfy.\r\n> \r\n> Should we try to make \\password and libpq more flexible instead? Two\r\n> things got discussed in this area since v10:\r\n> - The length of the random salt.\r\n> - The iteration number.\r\n> \r\n> Or we could bump up the defaults, and come back to that in a few\r\n> years, again.. ;p\r\n\r\nHere is another attempt at this patch that takes into account the SCRAM \r\ncode refactor. I addressed some of Daniel's previous feedback, but will \r\nneed to make another pass on the docs and the assert trace as the main \r\nfocus of this revision was bringing the code inline with the recent changes.\r\n\r\nThis patch changes the function name to \"scram_build_secret\" and now \r\naccepts a new parameter of hash type. This sets it up to handle \r\nadditional hashes in the future.\r\n\r\nI do agree we should make libpq more flexible, but as mentioned in the \r\noriginal thread, this does not solve the *server side* cases where a \r\nuser needs to build a SCRAM secret. For example, being able to \r\nprecompute hashes on one server before sending them to another server, \r\nwhich can require no plaintext passwords if the server is randomly \r\ngenerating the data.\r\n\r\nAnother use case comes from the \"pg_tle\" project, specifically with the \r\nability to write a \"check_password_hook\" from an available PL[1]. 
If a \r\nuser does follow our best practices and sends a pre-built SCRAM secret \r\nover the wire, a hook can then verify that the secret is not contained \r\nwithin a common password dictionary.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://github.com/aws/pg_tle/blob/main/docs/04_hooks.md",
"msg_date": "Mon, 23 Jan 2023 00:58:40 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-23 00:58:40 -0500, Jonathan S. Katz wrote:\n> Here is another attempt at this patch that takes into account the SCRAM code\n> refactor. I addressed some of Daniel's previous feedback, but will need to\n> make another pass on the docs and the assert trace as the main focus of this\n> revision was bringing the code inline with the recent changes.\n\nThis reliably fails on CI:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3988\n\nI think this is related to encoding issues. The 32bit debian task\nintentionally uses LANG=C. Resulting in failures like:\nhttps://api.cirrus-ci.com/v1/artifact/task/6696410851049472/testrun/build-32/testrun/regress/regress/regression.diffs\n\nWindows fails with a similar issue:\nhttps://api.cirrus-ci.com/v1/artifact/task/5676064060473344/testrun/build/testrun/regress/regress/regression.diffs\n\nI've set the patch as waiting on author for now.\n\n- Andres\n\n\n",
"msg_date": "Tue, 14 Feb 2023 12:17:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 2/14/23 3:17 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2023-01-23 00:58:40 -0500, Jonathan S. Katz wrote:\r\n>> Here is another attempt at this patch that takes into account the SCRAM code\r\n>> refactor. I addressed some of Daniel's previous feedback, but will need to\r\n>> make another pass on the docs and the assert trace as the main focus of this\r\n>> revision was bringing the code inline with the recent changes.\r\n> \r\n> This reliably fails on CI:\r\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3988\r\n> \r\n> I think this is related to encoding issues. The 32bit debian task\r\n> intentionally uses LANG=C. Resulting in failures like:\r\n> https://api.cirrus-ci.com/v1/artifact/task/6696410851049472/testrun/build-32/testrun/regress/regress/regression.diffs\r\n> \r\n> Windows fails with a similar issue:\r\n> https://api.cirrus-ci.com/v1/artifact/task/5676064060473344/testrun/build/testrun/regress/regress/regression.diffs\r\n> \r\n> I've set the patch as waiting on author for now.\r\n\r\nThanks for the explanation. I'll work on fixing that in the next go round.\r\n\r\nJonathan",
"msg_date": "Tue, 14 Feb 2023 15:19:22 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 2/14/23 3:19 PM, Jonathan S. Katz wrote:\r\n> On 2/14/23 3:17 PM, Andres Freund wrote:\r\n\r\n>> This reliably fails on CI:\r\n>> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3988\r\n>>\r\n>> I think this is related to encoding issues. The 32bit debian task\r\n>> intentionally uses LANG=C. Resulting in failures like:\r\n>> https://api.cirrus-ci.com/v1/artifact/task/6696410851049472/testrun/build-32/testrun/regress/regress/regression.diffs\r\n>>\r\n>> Windows fails with a similar issue:\r\n>> https://api.cirrus-ci.com/v1/artifact/task/5676064060473344/testrun/build/testrun/regress/regress/regression.diffs\r\n> \r\n> Thanks for the explanation. I'll work on fixing that in the next go round.\r\n\r\n(First -- I really like the current status of running the tests with \r\nMeson. I'm finding it very easy to use -- doing the locale testing was \r\npretty easy too!)\r\n\r\nI stared at this for a bit to see what we do in other regression tests \r\nusing unicode strings. I looked at the regression tests for strings[1] \r\nand ICU collations[2].\r\n\r\nIn \"strings\", all the escaped Unicode strings are in the low bits so \r\nthey're convertible to ASCII.\r\n\r\nIn the ICU test, it does a check to see if we're using UTF-8: if we're \r\nnot, it bails.\r\n\r\nFor this patch, the value of the failing test is to ensure that the \r\nSCRAM function honors SASLprep when building the secret. It makes more \r\nsense to use the current character (U+1680), which will be converted to \r\na space by the algorithm, vs. moving to U+0020 or something that does \r\nnot exercise the SASLprep code.\r\n\r\nI opted for the approach in [2]. v5 contains the branching logic for the \r\nUTF8 only tests, and the corresponding output files. 
I tested locally on \r\nmacOS against both UTF8 + C locales.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/strings.sql\r\n[2] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/collate.icu.utf8.sql",
"msg_date": "Tue, 14 Feb 2023 18:16:18 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 06:16:18PM -0500, Jonathan S. Katz wrote:\n> I opted for the approach in [2]. v5 contains the branching logic for the\n> UTF8 only tests, and the corresponding output files. I tested locally on\n> macOS against both UTF8 + C locales.\n\nI was reading this thread again, and pondered on this particular\npoint:\nhttps://www.postgresql.org/message-id/CAAWbhmhjcFc4oaGA_7YLUhtj6J+rxEY+BoDryGzNdaFLGfZZMg@mail.gmail.com\n\nWe've had our share of complains over the years that Postgres logs\npassword data in the logs with various DDLs, so I'd tend to agree that\nthis is not a practice we should try to encourage more. The\nparameterization of the SCRAM verifiers through GUCs (like Daniel's\nhttps://commitfest.postgresql.org/42/4201/ for the iteration number)\nis more promising because it is possible to not have to send the\npassword over the wire with once we let libpq take care of the\ncomputation, and the server would not know about that.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 15:48:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On 3/22/23 2:48 AM, Michael Paquier wrote:\r\n> On Tue, Feb 14, 2023 at 06:16:18PM -0500, Jonathan S. Katz wrote:\r\n>> I opted for the approach in [2]. v5 contains the branching logic for the\r\n>> UTF8 only tests, and the corresponding output files. I tested locally on\r\n>> macOS against both UTF8 + C locales.\r\n> \r\n> I was reading this thread again, and pondered on this particular\r\n> point:\r\n> https://www.postgresql.org/message-id/CAAWbhmhjcFc4oaGA_7YLUhtj6J+rxEY+BoDryGzNdaFLGfZZMg@mail.gmail.com\r\n> \r\n> We've had our share of complains over the years that Postgres logs\r\n> password data in the logs with various DDLs, so I'd tend to agree that\r\n> this is not a practice we should try to encourage more. The\r\n> parameterization of the SCRAM verifiers through GUCs (like Daniel's\r\n> https://commitfest.postgresql.org/42/4201/ for the iteration number)\r\n> is more promising because it is possible to not have to send the\r\n> password over the wire with once we let libpq take care of the\r\n> computation, and the server would not know about that\r\n\r\nI generally agree with not allowing password data to be in logs, but in \r\npractice, there are a variety of tools and extensions that obfuscate or \r\nremove passwords from PostgreSQL logs. Additionally, this function is \r\nnot targeted for SQL statements directly, but stored procedures.\r\n\r\nFor example, an extension like \"pg_tle\" exposes the ability for someone \r\nto write a \"check_password_hook\" directly from PL/pgSQL[1] (and other \r\nlanguages). As we've made it a best practice to pre-hash the password on \r\nthe client-side, a user who wants to write a check password hook against \r\na SCRAM verifier needs to be able to compare the verifier against some \r\nexisting set of plaintext criteria, and has to write their own function \r\nto do it. 
I have heard several users who have asked to do this, and the \r\nonly feedback I can give them is \"implement your own SCRAM build secret \r\nfunction.\"\r\n\r\nAnd, if my PostgreSQL server _is_ the client, e.g. it's making a dblink \r\ncall to another PostgreSQL server, the only way it can modify a password \r\nis by sending the plaintext credential over the wire.\r\n\r\nI don't see how the parameterization work applies here -- would we allow \r\nsalts to be parameterized? -- and it still would not allow the server to \r\nbuild out a SCRAM secret for these cases.\r\n\r\nMaybe I'm not conveying the problem this is solving -- I'm happy to go \r\none more round trying to make it clearer -- but if this is not clear, \r\nit'd be good to at least develop an alternative approach to this before \r\nwithdrawing the patch.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://github.com/aws/pg_tle/blob/main/docs/06_plpgsql_examples.md#example-password-check-hook-against-bad-password-dictionary",
"msg_date": "Wed, 22 Mar 2023 10:06:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "> On 22 Mar 2023, at 15:06, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 3/22/23 2:48 AM, Michael Paquier wrote:\n\n>> I was reading this thread again, and pondered on this particular\n>> point:\n>> https://www.postgresql.org/message-id/CAAWbhmhjcFc4oaGA_7YLUhtj6J+rxEY+BoDryGzNdaFLGfZZMg@mail.gmail.com\n>> We've had our share of complains over the years that Postgres logs\n>> password data in the logs with various DDLs, so I'd tend to agree that\n>> this is not a practice we should try to encourage more. The\n>> parameterization of the SCRAM verifiers through GUCs (like Daniel's\n>> https://commitfest.postgresql.org/42/4201/ for the iteration number)\n>> is more promising because it is possible to not have to send the\n>> password over the wire with once we let libpq take care of the\n>> computation, and the server would not know about that\n> \n> I generally agree with not allowing password data to be in logs, but in practice, there are a variety of tools and extensions that obfuscate or remove passwords from PostgreSQL logs. Additionally, this function is not targeted for SQL statements directly, but stored procedures.\n> \n> For example, an extension like \"pg_tle\" exposes the ability for someone to write a \"check_password_hook\" directly from PL/pgSQL[1] (and other languages). As we've made it a best practice to pre-hash the password on the client-side, a user who wants to write a check password hook against a SCRAM verifier needs to be able to compare the verifier against some existing set of plaintext criteria, and has to write their own function to do it. 
I have heard several users who have asked to do this, and the only feedback I can give them is \"implement your own SCRAM build secret function.\"\n\nI'm not sure I follow; doesn't this - coupled with this patch - imply passing\nthe plaintext password from client to the server, hashing it with a known salt\nand comparing this with something in plaintext hashed with the same known salt?\nIf so, that's admittedly not a usecase I am terribly excited about. My\npreference is to bring password checks to the plaintext password, not bring the\nplaintext password to the password check.\n\n> And, if my PostgreSQL server _is_ the client, e.g. it's making a dblink call to another PostgreSQL server, the only way it can modify a password is by sending the plaintext credential over the wire.\n\nMy experience with dblink setups is too small to have much insight here, but I\n(perhaps naively) assumed that dblink setups generally involved two servers\nunder the same control making such changes possible out of band.\n\n> Maybe I'm not conveying the problem this is solving -- I'm happy to go one more round trying to make it clearer -- but if this is not clear, it'd be good to at develop an alternative approach to this before withdrawing the patch.\n\nIf this is mainly targeting extension use, is there a way in which an extension\ncould have all this functionality while: a) not exposing any of it from the\nserver; b) not having to copy/paste lots into the extension; and c) having a\nreasonable way to keep up with any changes made in the backend?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 16:47:20 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 4:48 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 22 Mar 2023, at 15:06, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> > On 3/22/23 2:48 AM, Michael Paquier wrote:\n>\n> >> I was reading this thread again, and pondered on this particular\n> >> point:\n> >> https://www.postgresql.org/message-id/CAAWbhmhjcFc4oaGA_7YLUhtj6J+rxEY+BoDryGzNdaFLGfZZMg@mail.gmail.com\n> >> We've had our share of complains over the years that Postgres logs\n> >> password data in the logs with various DDLs, so I'd tend to agree that\n> >> this is not a practice we should try to encourage more. The\n> >> parameterization of the SCRAM verifiers through GUCs (like Daniel's\n> >> https://commitfest.postgresql.org/42/4201/ for the iteration number)\n> >> is more promising because it is possible to not have to send the\n> >> password over the wire with once we let libpq take care of the\n> >> computation, and the server would not know about that\n> >\n> > I generally agree with not allowing password data to be in logs, but in practice, there are a variety of tools and extensions that obfuscate or remove passwords from PostgreSQL logs. Additionally, this function is not targeted for SQL statements directly, but stored procedures.\n> >\n> > For example, an extension like \"pg_tle\" exposes the ability for someone to write a \"check_password_hook\" directly from PL/pgSQL[1] (and other languages). As we've made it a best practice to pre-hash the password on the client-side, a user who wants to write a check password hook against a SCRAM verifier needs to be able to compare the verifier against some existing set of plaintext criteria, and has to write their own function to do it. 
I have heard several users who have asked to do this, and the only feedback I can give them is \"implement your own SCRAM build secret function.\"\n>\n> I'm not sure I follow; doesn't this - coupled with this patch - imply passing\n> the plaintext password from client to the server, hashing it with a known salt\n> and comparing this with something in plaintext hashed with the same known salt?\n> If so, that's admittedly not a usecase I am terribly excited about. My\n> preference is to bring password checks to the plaintext password, not bring the\n> plaintext password to the password check.\n\nGiven how much we marketed, for good reasons, SCRAM as a way to\nfinally make postgres *not* do this, it seems like a really strange\npath to take to go back to doing it again.\n\nHaving the function always generate a random salt seems more\nreasonable though, and would perhaps be something that helps in some\nof the cases? It won't help with the password policy one, as it's too\nsecure for that, but it would help with the postgres-is-the-client\none?\n\n\n> > And, if my PostgreSQL server _is_ the client, e.g. it's making a dblink call to another PostgreSQL server, the only way it can modify a password is by sending the plaintext credential over the wire.\n>\n> My experience with dblink setups is too small to have much insight here, but I\n> (perhaps naively) assumed that dblink setups generally involved two servers\n> under the same control making such changes be possible out of band.\n\nI have seen a few, and certainly more FDW based ones these days. But\nI'm not sure I've come across one yet where one wants to *change the\npassword* through dblink. Since it's server-to-server, most people\nwould just change the password on the target server and then update\nthe FDW/dblink configuration with the new password. 
(Of course, the\nstorage of that password on the FDW/dblink layer is a terrible thing\nin the first place from a security perspective, but it's what we\nhave).\n\n\n> > Maybe I'm not conveying the problem this is solving -- I'm happy to go one more round trying to make it clearer -- but if this is not clear, it'd be good to at develop an alternative approach to this before withdrawing the patch.\n>\n> If this is mainly targeting extension use, is there a way in which an extension\n> could have all this functionality with no: a) not exposing any of it from the\n> server; b) not having to copy/paste lots into the extension and; c) have a\n> reasonable way to keep up with any changes made in the backend?\n\nI realize I forgot to write this reply back when the thread was alive,\nso here's a zombie-awakener!\n\nOne way to accomplish this would be to create a new predefined role,\nsay pg_change_own_password, by default granted to public. When a user\nis a member of this role they can, well, change their own password.\nAnd it will be done using the full security of scram, without cutting\nanything. Those that want to enforce a password policy or anything\nelse that requires the server to see the cleartext or\ncleartext-equivalent of the password can then revoke this role from\npublic, and force password changes to go through a security definer\nfunction, like SELECT pg_change_password_with_policy('joe',\n'mysupersecretpassword').\n\nThis function can then be under the control of an extension or\nwhatever you want. If one had the client under control one could for\nexample do the policy validation on the client and pass it to the\nbackend with some internal hash as well -- this would be entirely\nunder the control of the application though, as a generic libpq\nconnection could and should not be considered \"client under control\"\nin this case.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 11 Apr 2023 11:27:17 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 11:27:17AM +0200, Magnus Hagander wrote:\n> Having the function always generate a random salt seems more\n> reasonable though, and would perhaps be something that helps in some\n> of the cases? It won't help with the password policy one, as it's too\n> secure for that, but it would help with the postgres-is-the-client\n> one?\n\nWhile this is still hot.. Would it make sense to have a\nscram_salt_length GUC to control the length of the salt used when\ngenerating the SCRAM secret?\n--\nMichael",
"msg_date": "Fri, 14 Apr 2023 08:14:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "> On 14 Apr 2023, at 01:14, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Apr 11, 2023 at 11:27:17AM +0200, Magnus Hagander wrote:\n>> Having the function always generate a random salt seems more\n>> reasonable though, and would perhaps be something that helps in some\n>> of the cases? It won't help with the password policy one, as it's too\n>> secure for that, but it would help with the postgres-is-the-client\n>> one?\n> \n> While this is still hot.. Would it make sense to have a\n> scram_salt_length GUC to control the length of the salt used when\n> generating the SCRAM secret?\n\nWhat would be the intended usecase? I don’t have the RFC handy, does it say anything about salt length?\n\n./daniel\n\n",
"msg_date": "Fri, 14 Apr 2023 01:27:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 01:27:46AM +0200, Daniel Gustafsson wrote:\n> What would be the intended usecase? I don’t have the RFC handy, does\n> it say anything about salt length?\n\nHmm. I thought it did, but RFC 5802 has only these two paragraphs:\n\n If the authentication information is stolen from the authentication\n database, then an offline dictionary or brute-force attack can be\n used to recover the user's password. The use of salt mitigates this\n attack somewhat by requiring a separate attack on each password.\n Authentication mechanisms that protect against this attack are\n available (e.g., the EKE class of mechanisms). RFC 2945 [RFC2945] is\n an example of such technology. The WG elected not to use EKE like\n mechanisms as a basis for SCRAM.\n\n If an attacker obtains the authentication information from the\n authentication repository and either eavesdrops on one authentication\n exchange or impersonates a server, the attacker gains the ability to\n impersonate that user to all servers providing SCRAM access using the\n same hash function, password, iteration count, and salt. For this\n reason, it is important to use randomly generated salt values.\n--\nMichael",
"msg_date": "Fri, 14 Apr 2023 12:50:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "> On 14 Apr 2023, at 05:50, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Apr 14, 2023 at 01:27:46AM +0200, Daniel Gustafsson wrote:\n>> What would be the intended usecase? I don’t have the RFC handy, does\n>> it say anything about salt length?\n> \n> Hmm. I thought it did, but RFC 5802 has only these two paragraphs:\n> \n> If the authentication information is stolen from the authentication\n> database, then an offline dictionary or brute-force attack can be\n> used to recover the user's password. The use of salt mitigates this\n> attack somewhat by requiring a separate attack on each password.\n> Authentication mechanisms that protect against this attack are\n> available (e.g., the EKE class of mechanisms). RFC 2945 [RFC2945] is\n> an example of such technology. The WG elected not to use EKE like\n> mechanisms as a basis for SCRAM.\n> \n> If an attacker obtains the authentication information from the\n> authentication repository and either eavesdrops on one authentication\n> exchange or impersonates a server, the attacker gains the ability to\n> impersonate that user to all servers providing SCRAM access using the\n> same hash function, password, iteration count, and salt. For this\n> reason, it is important to use randomly generated salt values.\n\nThe salt needs to be unique, unpredictable and shall not repeat across password\ngeneration. The current 16 byte salted with pg_strong_random should provide\nthat and I'm not sure I see a usecase for allowing users to configure that.\nThe iteration count has a direct effect with the security/speed tradeoff but\nchanging the salt can basically only lead to lowering the security while not\ngaining efficiency, or am I missing something?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 14 Apr 2023 10:12:51 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 9:06 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> Maybe I'm not conveying the problem this is solving -- I'm happy to go\n> one more round trying to make it clearer -- but if this is not clear,\n> it'd be good to at develop an alternative approach to this before\n> withdrawing the patch.\n\nThis thread had some lively discussion, but it doesn't seem to have\nconverged towards consensus, and hasn't had activity since April. That\nbeing the case, maybe it's time to withdraw and reconsider the\napproach later?\n\n\n",
"msg_date": "Sat, 2 Dec 2023 13:51:52 +0700",
"msg_from": "John Naylor <johncnaylorls@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
},
{
"msg_contents": "On Sat, 2 Dec 2023 at 12:22, John Naylor <johncnaylorls@gmail.com> wrote:\n>\n> On Wed, Mar 22, 2023 at 9:06 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> >\n> > Maybe I'm not conveying the problem this is solving -- I'm happy to go\n> > one more round trying to make it clearer -- but if this is not clear,\n> > it'd be good to at develop an alternative approach to this before\n> > withdrawing the patch.\n>\n> This thread had some lively discussion, but it doesn't seem to have\n> converged towards consensus, and hasn't had activity since April. That\n> being the case, maybe it's time to withdraw and reconsider the\n> approach later?\n\nI have changed the status of this commitfest entry to \"Returned with\nFeedback\" as currently nobody pursued the discussion to get a\nconclusion. Feel free to discuss more on this and once it reaches a\nbetter shape, add a new entry for this to take it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 09:05:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: User functions for building SCRAM secrets"
}
] |
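The secret format at the heart of the thread above is the one PostgreSQL stores in pg_authid.rolpassword: `SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey>`. As a minimal sketch of the RFC 5802 derivation such a user-facing function would perform — using Python's stdlib PBKDF2 in place of server-side code, and deliberately omitting the SASLprep normalization step that the regression-test discussion centred on; the function name and defaults here are illustrative, not part of the proposed patch:

```python
import base64
import hashlib
import hmac
import os

def build_scram_sha256_secret(password, salt=None, iterations=4096):
    """Derive the RFC 5802 keys and format them the way PostgreSQL stores
    SCRAM secrets: SCRAM-SHA-256$<iter>:<salt>$<StoredKey>:<ServerKey>.
    NOTE: SASLprep normalization of the password is intentionally omitted."""
    if salt is None:
        salt = os.urandom(16)  # the server uses a 16-byte random salt
    # SaltedPassword := Hi(password, salt, i), which is PBKDF2-HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    b64 = lambda raw: base64.b64encode(raw).decode("ascii")
    return "SCRAM-SHA-256$%d:%s$%s:%s" % (
        iterations, b64(salt), b64(stored_key), b64(server_key))
```

With a fixed salt the output is deterministic, which is what made the SASLprep regression tests discussed above possible; with the default random salt every call produces a distinct secret for the same password.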
[
{
"msg_contents": "Hi,\nI was reading examine_variable in src/backend/utils/adt/selfuncs.c\n\nIt seems we already have the rte coming out of the loop which starts on\nline 5181.\n\nHere is a patch which reuses the return value from `planner_rt_fetch`.\n\nPlease take a look.\n\nThanks",
"msg_date": "Mon, 31 Oct 2022 16:49:08 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Reusing return value from planner_rt_fetch"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was reading examine_variable in src/backend/utils/adt/selfuncs.c\n> It seems we already have the rte coming out of the loop which starts on\n> line 5181.\n> Here is a patch which reuses the return value from `planner_rt_fetch`.\n\nplanner_rt_fetch is not so expensive that we should contort the code\nto avoid one call. So I'm not sure this is an improvement. It\ndoesn't seem to be more readable, and it adds assumptions on\nwhether appinfo is initially null or becomes so later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 Oct 2022 21:47:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reusing return value from planner_rt_fetch"
}
] |
[
{
"msg_contents": "Hi,\n\nHere is a patch to allow PostgreSQL to use $SUBJECT. It is from the\nAIO patch-set[1]. It adds three new settings, defaulting to off:\n\n io_data_direct = whether to use O_DIRECT for main data files\n io_wal_direct = ... for WAL\n io_wal_init_direct = ... for WAL-file initialisation\n\nO_DIRECT asks the kernel to avoid caching file data as much as\npossible. Here's a fun quote about it[2]:\n\n\"The exact semantics of Direct I/O (O_DIRECT) are not well specified.\nIt is not a part of POSIX, or SUS, or any other formal standards\nspecification. The exact meaning of O_DIRECT has historically been\nnegotiated in non-public discussions between powerful enterprise\ndatabase companies and proprietary Unix systems, and its behaviour has\ngenerally been passed down as oral lore rather than as a formal set of\nrequirements and specifications.\"\n\nIt gives the kernel the opportunity to move data directly between\nPostgreSQL's user space buffers and the storage hardware using DMA\nhardware, that is, without CPU involvement or copying. Not all\nstorage stacks can do that, for various reasons, but even if not, the\ncaching policy should ideally still use temporary buffers and avoid\npolluting the page cache.\n\nThese settings currently destroy performance, and are not intended to\nbe used by end-users, yet! That's why we filed them under\nDEVELOPER_OPTIONS. You don't get automatic read-ahead, concurrency,\nclustering or (of course) buffering from the kernel. The idea is that\nlater parts of the AIO patch-set will introduce mechanisms to replace\nwhat the kernel is doing for us today, and then more, since we ought\nto be even better at predicting our own future I/O than it, so that\nwe'll finish up ahead. 
Even with all that, you wouldn't want to turn\nit on by default because the default shared_buffers would be\ninsufficient for any real system, and there are portability problems.\n\nExamples of slowness:\n\n* every 8KB sequential read or write becomes a full round trip to the\nstorage, one at a time\n\n* data that is written to WAL and then read back in by WAL sender will\nincur a full I/O round trip (that's probably not really an AIO problem,\nthat's something we should probably address by using shared memory\ninstead of files, as noted as a TODO item in the source code)\n\nMemory alignment patches:\n\nDirect I/O generally needs to be done to/from VM page-aligned\naddresses, but only \"standard\" 4KB pages, even when larger VM pages\nare in use (if there is an exotic system where that isn't true, it\nwon't work). We need to deal with buffers on the stack, the heap and\nin shmem. For the stack, see patch 0001. For the heap and shared\nmemory, see patch 0002, but David Rowley is going to propose that part\nseparately, as MemoryContext API adjustments are a specialised enough\ntopic to deserve another thread; here I include a copy as a\ndependency. The main direct I/O patch is 0003.\n\nAssorted portability notes:\n\nI expect this to \"work\" (that is, successfully destroy performance) on\ntypical developer systems running at least Linux, macOS, Windows and\nFreeBSD. By work, I mean: not be rejected by PostgreSQL, not be\nrejected by the kernel, and influence kernel cache behaviour on common\nfilesystems. It might be rejected with ENOTSUP, EINVAL etc on some\nmore exotic filesystems and OSes. Of currently supported OSes, only\nOpenBSD and Solaris don't have O_DIRECT at all, and we'll reject the\nGUCs. 
For macOS and Windows we internally translate our own\nPG_O_DIRECT flag to the correct flags/calls (committed a while\nback[3]).\n\nOn Windows, scatter/gather is available only with direct I/O, so a\ntrue pwritev would in theory be possible, but that has some more\ncomplications and is left for later patches (probably using native\ninterfaces, not disguising as POSIX).\n\nThere may be systems on which 8KB offset alignment will not work at\nall or not work well, and that's expected. For example, BTRFS, ZFS,\nJFS \"big file\", UFS etc allow larger-than-8KB blocks/records, and an\n8KB write will have to trigger a read-before-write. Note that\noffset/length alignment requirements (blocks) are independent of\nbuffer alignment requirements (memory pages, 4KB).\n\nThe behaviour and cache coherency of files that have open descriptors\nusing both direct and non-direct flags may be complicated and vary\nbetween systems. The patch currently lets you change the GUCs at\nruntime so backends can disagree: that should probably not be allowed,\nbut is like that now for experimentation. More study is required.\n\nIf someone has a compiler that we don't know how to do\npg_attribute_aligned() for, then we can't make correctly aligned stack\nbuffers, so in that case direct I/O is disabled, but I don't know of\nsuch a system (maybe aCC, but we dropped it). That's why smgr code\ncan only assert that pointers are IO-aligned if PG_O_DIRECT != 0, and\nwhy PG_O_DIRECT is forced to 0 if there is no pg_attribute_aligned()\nmacro, disabling the GUCs.\n\nThis seems to be an independent enough piece to get into the tree on\nits own, with the proviso that it's not actually useful yet other than\nfor experimentation. 
Thoughts?\n\nThese patches have been hacked on at various times by Andres Freund,\nDavid Rowley and me.\n\n[1] https://wiki.postgresql.org/wiki/AIO\n[2] https://ext4.wiki.kernel.org/index.php/Clarifying_Direct_IO%27s_Semantics\n[3] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BADiyyHe0cun2wfT%2BSVnFVqNYPxoO6J9zcZkVO7%2BNGig%40mail.gmail.com",
"msg_date": "Tue, 1 Nov 2022 20:36:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Direct I/O"
},
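The portability caveats above (no O_DIRECT at all on OpenBSD and Solaris, translation to other flags/calls on macOS and Windows, and possible per-filesystem rejection) can be probed with a few lines. This is an illustrative sketch, not part of the patch; `try_open_direct` is a hypothetical helper, and its result depends entirely on the platform and filesystem it runs on:

```python
import os
import tempfile

def try_open_direct(path):
    """Probe whether this platform/filesystem accepts O_DIRECT.  Python's
    os module only exposes O_DIRECT where the OS provides it (e.g. Linux,
    FreeBSD); macOS needs fcntl(F_NOCACHE) and Windows needs native flags
    instead, much as the patch maps its own PG_O_DIRECT per platform."""
    flag = getattr(os, "O_DIRECT", None)
    if flag is None:
        return False  # no O_DIRECT at the libc level (e.g. macOS, Windows)
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | flag, 0o600)
    except OSError:
        return False  # filesystem rejected the flag (e.g. EINVAL on tmpfs)
    os.close(fd)
    return True

fd, probe_path = tempfile.mkstemp()
os.close(fd)
supported = try_open_direct(probe_path)
os.unlink(probe_path)
print("O_DIRECT usable here:", supported)
```

On a typical Linux ext4/xfs setup this prints True; where the flag is missing or rejected it prints False, mirroring why the GUCs are refused on platforms where PG_O_DIRECT ends up as 0.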
{
"msg_contents": "On Tue, Nov 01, 2022 at 08:36:18PM +1300, Thomas Munro wrote:\n> Hi,\n> \n> Here is a patch to allow PostgreSQL to use $SUBJECT. It is from the\n> AIO patch-set[1]. It adds three new settings, defaulting to off:\n> \n> io_data_direct = whether to use O_DIRECT for main data files\n> io_wal_direct = ... for WAL\n> io_wal_init_direct = ... for WAL-file initialisation\n\nYou added 3 booleans, but I wonder if it's better to add a string GUC\nwhich is parsed for comma separated strings. (By \"better\", I mean\nreducing the number of new GUCs - which is less important for developer\nGUCs anyway.)\n\nDIO is slower, but not so much that it can't run under CI. I suggest to\nadd an 099 commit to enable the feature during development.\n\nNote that this fails under linux with fsanitize=align:\n../src/backend/storage/file/buffile.c:117:17: runtime error: member access within misaligned address 0x561a4a8e40f8 for type 'struct BufFile', which requires 4096 byte alignment\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 1 Nov 2022 08:33:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 2:33 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Nov 01, 2022 at 08:36:18PM +1300, Thomas Munro wrote:\n> > io_data_direct = whether to use O_DIRECT for main data files\n> > io_wal_direct = ... for WAL\n> > io_wal_init_direct = ... for WAL-file initialisation\n>\n> You added 3 booleans, but I wonder if it's better to add a string GUC\n> which is parsed for comma separated strings. (By \"better\", I mean\n> reducing the number of new GUCs - which is less important for developer\n> GUCs anyway.)\n\nInteresting idea. So \"direct_io = data, wal, wal_init\", or maybe that\nshould be spelled io_direct. (\"Direct I/O\" is a common term of art,\nbut we also have some more io_XXX GUCs in later patches, so it's hard\nto choose...)\n\n> DIO is slower, but not so much that it can't run under CI. I suggest to\n> add an 099 commit to enable the feature during development.\n\nGood idea, will do.\n\n> Note that this fails under linux with fsanitize=align:\n> ../src/backend/storage/file/buffile.c:117:17: runtime error: member access within misaligned address 0x561a4a8e40f8 for type 'struct BufFile', which requires 4096 byte alignment\n\nOh, so BufFile is palloc'd and contains one of these. BufFile is not\neven using direct I/O, but by these rules it would need to be\npalloc_io_align'd. I will think about what to do about that...\n\n\n",
"msg_date": "Wed, 2 Nov 2022 09:44:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 09:44:30 +1300, Thomas Munro wrote:\n> On Wed, Nov 2, 2022 at 2:33 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Tue, Nov 01, 2022 at 08:36:18PM +1300, Thomas Munro wrote:\n> > > io_data_direct = whether to use O_DIRECT for main data files\n> > > io_wal_direct = ... for WAL\n> > > io_wal_init_direct = ... for WAL-file initialisation\n> >\n> > You added 3 booleans, but I wonder if it's better to add a string GUC\n> > which is parsed for comma separated strings.\n\nIn the past more complicated GUCs have not been well received, but it does\nseem like a nice way to reduce the amount of redundant stuff.\n\nPerhaps we could use the guc assignment hook to transform the input value into\na bitmask?\n\n\n> > (By \"better\", I mean reducing the number of new GUCs - which is less\n> > important for developer GUCs anyway.)\n\nFWIW, if / once we get to actual AIO, at least some of these would stop being\ndeveloper-only GUCs. There's substantial performance benefits in using DIO\nwith AIO. Buffered IO requires the CPU to copy the data from the userspace\ninto the kernelspace. But DIO can use DMA for that, freeing the CPU to do more\nuseful work. Buffered IO tops out much much earlier than AIO + DIO, and\nunfortunately tops out at much lower speeds on server CPUs.\n\n\n> > DIO is slower, but not so much that it can't run under CI. I suggest to\n> > add an 099 commit to enable the feature during development.\n> \n> Good idea, will do.\n\nMight be worth to additionally have a short tap test that does some basic\nstuff with DIO and leave that enabled? 
I think it'd be good to have\ncheck-world exercise DIO on dev machines, to reduce the likelihood of finding\nproblems only in CI, which is somewhat painful.\n\n\n> > Note that this fails under linux with fsanitize=align:\n> > ../src/backend/storage/file/buffile.c:117:17: runtime error: member access within misaligned address 0x561a4a8e40f8 for type 'struct BufFile', which requires 4096 byte alignment\n> \n> Oh, so BufFile is palloc'd and contains one of these. BufFile is not\n> even using direct I/O, but by these rules it would need to be\n> palloc_io_align'd. I will think about what to do about that...\n\nIt might be worth having two different versions of the struct, so we don't\nimpose unnecessarily high alignment everywhere?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 15:54:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
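Justin's single-GUC idea combined with Andres's suggestion of an assignment hook that turns the value into a bitmask could look roughly like the following. This is a Python sketch only — the real implementation would be a C GUC check/assign hook — and the option names and flag values are invented for illustration:

```python
# Hypothetical flag values mirroring the three booleans in the patch.
IO_DIRECT_DATA = 0x1
IO_DIRECT_WAL = 0x2
IO_DIRECT_WAL_INIT = 0x4

_FLAGS = {
    "data": IO_DIRECT_DATA,
    "wal": IO_DIRECT_WAL,
    "wal_init": IO_DIRECT_WAL_INIT,
}

def parse_io_direct(value):
    """Parse a comma-separated io_direct setting (e.g. "data, wal") into a
    bitmask, rejecting unknown and duplicate tokens, the way a GUC
    assignment hook would validate and transform the string."""
    mask = 0
    if not value.strip():
        return mask  # empty list: direct I/O disabled everywhere
    for token in value.split(","):
        token = token.strip().lower()
        flag = _FLAGS.get(token)
        if flag is None:
            raise ValueError('invalid io_direct option: "%s"' % token)
        if mask & flag:
            raise ValueError('duplicate io_direct option: "%s"' % token)
        mask |= flag
    return mask
```

An empty string yields an empty mask, matching the patch's off-by-default behaviour, while the hook-style validation gives the user an immediate error for a misspelled option.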
{
"msg_contents": "Hi,\n\nOn 2022-11-01 15:54:02 -0700, Andres Freund wrote:\n> On 2022-11-02 09:44:30 +1300, Thomas Munro wrote:\n> > Oh, so BufFile is palloc'd and contains one of these. BufFile is not\n> > even using direct I/O, but by these rules it would need to be\n> > palloc_io_align'd. I will think about what to do about that...\n>\n> It might be worth having two different versions of the struct, so we don't\n> impose unnecessarily high alignment everywhere?\n\nAlthough it might actually be worth aligning fully everywhere - there's a\nnoticable performance difference for buffered read IO.\n\nI benchmarked this on my workstation and laptop.\n\nI mmap'ed a buffer with 2 MiB alignment, MAP_ANONYMOUS | MAP_HUGETLB, and then\nmeasured performance of reading 8192 bytes into the buffer at different\noffsets. Each time I copied 16GiB in total. Within a program invocation I\nbenchmarked each offset 4 times, threw away the worst measurement, and\naveraged the rest. Then used the best of three program invocations.\n\nworkstation with dual xeon Gold 5215:\n\n turbo on turbo off\noffset GiB/s GiB/s\n0 18.358 13.528\n8 15.361 11.472\n9 15.330 11.418\n32 17.583 13.097\n512 17.707 13.229\n513 15.890 11.852\n4096 18.176 13.568\n8192 18.088 13.566\n2Mib 18.658 13.496\n\n\nlaptop with i9-9880H:\n\n turbo on turbo off\noffset GiB/s GiB/s\n0 33.589 17.160\n8 28.045 14.301\n9 27.582 14.318\n32 31.797 16.711\n512 32.215 16.810\n513 28.864 14.932\n4096 32.503 17.266\n8192 32.871 17.277\n2Mib 32.657 17.262\n\n\nSeems pretty clear that using 4096 byte alignment is worth it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 17:21:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
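For anyone wanting to reproduce the shape of that experiment, here is a stripped-down stand-alone sketch. memcpy stands in for the buffered read() used above, the function name is invented for illustration, and absolute numbers will of course differ from the tables; the aligned/misaligned contrast is the point:

```c
#define _POSIX_C_SOURCE 200809L

#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CHUNK 8192

/*
 * Copy `total` bytes in 8 kB chunks into a destination placed `offset`
 * bytes (0..4095) past a 4096-byte-aligned base, and return the achieved
 * bytes/second.  When offset % 4096 != 0 the destination is misaligned
 * relative to memory pages, which is the case the tables above show as
 * slower.
 */
static double
copy_bandwidth(size_t offset, size_t total)
{
	char	   *src;
	char	   *base;
	struct timespec t0,
				t1;

	if (posix_memalign((void **) &src, 4096, CHUNK) != 0 ||
		posix_memalign((void **) &base, 4096, CHUNK + 4096) != 0)
		return -1.0;
	memset(src, 'x', CHUNK);

	char	   *dst = base + offset;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t done = 0; done < total; done += CHUNK)
		memcpy(dst, src, CHUNK);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double		secs = (t1.tv_sec - t0.tv_sec) +
	(t1.tv_nsec - t0.tv_nsec) / 1e9;

	free(src);
	free(base);
	return secs > 0 ? total / secs : -1.0;
}
```

To mirror the methodology above, run each offset several times, discard the worst measurement, and compare averages; a single run is too noisy to trust.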
{
"msg_contents": "On 11/1/22 2:36 AM, Thomas Munro wrote:\n\n> Hi,\n>\n> Here is a patch to allow PostgreSQL to use $SUBJECT. It is from the\n\nThis is exciting to see! There are two other items to add to the TODO list \nbefore this would be ready for production:\n\n1) work_mem. This is a significant impediment to scaling shared buffers \nthe way you'd want to.\n\n2) Clock sweep. Specifically, currently the only thing that drives \nusage_count is individual backends running the clock hand. On large \nsystems with 75% of memory going to shared_buffers, that becomes a very \nsignificant problem, especially when the backend running the clock sweep \nis doing so in order to perform an operation like a b-tree page split. I \nsuspect it shouldn't be too hard to deal with this issue by just having \nbgwriter or another bgworker proactively ensuring some reasonable number \nof buffers with usage_count=0 exist.\n\n\nOne other thing to be aware of: overflowing an SLRU becomes a massive \nproblem if there isn't a filesystem backing the SLRU. Obviously only an \nissue if you try and apply DIO to SLRU files.\n\n\n\n",
"msg_date": "Fri, 4 Nov 2022 14:47:31 -0500",
"msg_from": "Jim Nasby <nasbyj@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-04 14:47:31 -0500, Jim Nasby wrote:\n> On 11/1/22 2:36 AM, Thomas Munro wrote:\n> > Here is a patch to allow PostgreSQL to use $SUBJECT. It is from the\n> \n> This is exciting to see! There's two other items to add to the TODO list\n> before this would be ready for production:\n> \n> 1) work_mem. This is a significant impediment to scaling shared buffers the\n> way you'd want to.\n\nI don't really think that's closely enough related to tackle together. Yes,\nit'd be easier to set a large s_b if we had better work_mem management, but\nit's a completely distinct problem, and in a lot of cases you could use DIO\nwithout tackling the work_mem issue.\n\n\n> 2) Clock sweep. Specifically, currently the only thing that drives\n> usage_count is individual backends running the clock hand. On large systems\n> with 75% of memory going to shared_buffers, that becomes a very significant\n> problem, especially when the backend running the clock sweep is doing so in\n> order to perform an operation like a b-tree page split. I suspect it\n> shouldn't be too hard to deal with this issue by just having bgwriter or\n> another bgworker proactively ensuring some reasonable number of buffers with\n> usage_count=0 exist.\n\nI agree this isn't great, but I don't think the replacement efficiency is that\nbig a problem. Replacing the wrong buffers is a bigger issue.\n\nI've run tests with s_b=768GB (IIRC) without it showing up as a major\nissue. If you have an extreme replacement rate at such a large s_b you have a\nlot of other problems.\n\nI don't want to discourage anybody from tackling the clock replacement issues,\nthe contrary, but AIO+DIO can show significant wins without those\nchanges. It's already a humongous project...\n\n\n> One other thing to be aware of: overflowing as SLRU becomes a massive\n> problem if there isn't a filesystem backing the SLRU. 
Obviously only an\n> issue if you try and apply DIO to SLRU files.\n\nWhich would be a very bad idea for now.... Thomas does have a patch for moving\nthem into the main buffer pool.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 17:38:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 2:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Memory alignment patches:\n>\n> Direct I/O generally needs to be done to/from VM page-aligned\n> addresses, but only \"standard\" 4KB pages, even when larger VM pages\n> are in use (if there is an exotic system where that isn't true, it\n> won't work). We need to deal with buffers on the stack, the heap and\n> in shmem. For the stack, see patch 0001. For the heap and shared\n> memory, see patch 0002, but David Rowley is going to propose that part\n> separately, as MemoryContext API adjustments are a specialised enough\n> topic to deserve another thread; here I include a copy as a\n> dependency. The main direct I/O patch is 0003.\n\nOne thing to note: Currently, a request to aset above 8kB must go into a\ndedicated block. Not sure if it's a coincidence that that matches the\ndefault PG page size, but if allocating pages on the heap is hot enough,\nmaybe we should consider raising that limit. Although then, aligned-to-4kB\nrequests would result in 16kB chunks requested unless a different allocator\nwas used.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 10 Nov 2022 14:26:20 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-10 14:26:20 +0700, John Naylor wrote:\n> On Tue, Nov 1, 2022 at 2:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> > Memory alignment patches:\n> >\n> > Direct I/O generally needs to be done to/from VM page-aligned\n> > addresses, but only \"standard\" 4KB pages, even when larger VM pages\n> > are in use (if there is an exotic system where that isn't true, it\n> > won't work). We need to deal with buffers on the stack, the heap and\n> > in shmem. For the stack, see patch 0001. For the heap and shared\n> > memory, see patch 0002, but David Rowley is going to propose that part\n> > separately, as MemoryContext API adjustments are a specialised enough\n> > topic to deserve another thread; here I include a copy as a\n> > dependency. The main direct I/O patch is 0003.\n> \n> One thing to note: Currently, a request to aset above 8kB must go into a\n> dedicated block. Not sure if it's a coincidence that that matches the\n> default PG page size, but if allocating pages on the heap is hot enough,\n> maybe we should consider raising that limit. Although then, aligned-to-4kB\n> requests would result in 16kB chunks requested unless a different allocator\n> was used.\n\nWith one exception, there's only a small number of places that allocate pages\ndynamically and we only do it for a small number of buffers. So I don't think\nwe should worry too much about this for now.\n\nThe one exception to this: GetLocalBufferStorage(). But it already batches\nmemory allocations by increasing sizes, so I think we're good as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Nov 2022 18:50:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 11:54 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-02 09:44:30 +1300, Thomas Munro wrote:\n> > On Wed, Nov 2, 2022 at 2:33 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > On Tue, Nov 01, 2022 at 08:36:18PM +1300, Thomas Munro wrote:\n> > > > io_data_direct = whether to use O_DIRECT for main data files\n> > > > io_wal_direct = ... for WAL\n> > > > io_wal_init_direct = ... for WAL-file initialisation\n> > >\n> > > You added 3 booleans, but I wonder if it's better to add a string GUC\n> > > which is parsed for comma separated strings.\n\nDone as io_direct=data,wal,wal_init. Thanks Justin, this is better.\nI resisted the urge to invent a meaning for 'on' and 'off', mainly\nbecause it's not clear what values 'on' should enable and it'd be\nstrange to have off without on, so for now an empty string means off.\nI suppose the meaning of this string could evolve over time: the names\nof forks, etc.\n\n> Perhaps we could use the guc assignment hook to transform the input value into\n> a bitmask?\n\nMakes sense. The only tricky question was where to store the GUC. I\nwent for fd.c for now, but it doesn't seem quite right...\n\n> > > DIO is slower, but not so much that it can't run under CI. I suggest to\n> > > add an 099 commit to enable the feature during development.\n> >\n> > Good idea, will do.\n\nDone. The tests take 2-3x as long depending on the OS.\n\n> Might be worth to additionally have a short tap test that does some basic\n> stuff with DIO and leave that enabled? 
I think it'd be good to have\n> check-world exercise DIO on dev machines, to reduce the likelihood of finding\n> problems only in CI, which is somewhat painful.\n\nDone.\n\n> > > Note that this fails under linux with fsanitize=align:\n> > > ../src/backend/storage/file/buffile.c:117:17: runtime error: member access within misaligned address 0x561a4a8e40f8 for type 'struct BufFile', which requires 4096 byte alignment\n> >\n> > Oh, so BufFile is palloc'd and contains one of these. BufFile is not\n> > even using direct I/O, but by these rules it would need to be\n> > palloc_io_align'd. I will think about what to do about that...\n>\n> It might be worth having two different versions of the struct, so we don't\n> impose unnecessarily high alignment everywhere?\n\nDone. I now have PGAlignedBlock (unchanged) and PGIOAlignedBlock.\nYou have to use the latter for SMgr, because I added alignment\nassertions there. We might as well use it for any other I/O such as\nfrontend code too for a chance of a small performance boost as you\nshowed. For now I have not used PGIOAlignedBlock for BufFile, even\nthough it would be a great candidate for a potential speedup, only\nbecause I am afraid of adding padding to every BufFile in scenarios\nwhere we allocate many (could be avoided, a subject for separate\nresearch).\n\nV2 comprises:\n\n0001 -- David's palloc_aligned() patch\nhttps://commitfest.postgresql.org/41/3999/\n0002 -- I/O-align almost all buffers used for I/O\n0003 -- Add the GUCs\n0004 -- Throwaway hack to make cfbot turn the GUCs on",
"msg_date": "Wed, 14 Dec 2022 17:48:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
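The accepted syntax can be illustrated with a toy stand-alone parser that turns the comma-separated value into a bitmask, with an empty string meaning "off". The flag names are invented for the example; the real check hook uses PostgreSQL's list-splitting and GUC allocation routines rather than strtok:

```c
#include <stdbool.h>
#include <string.h>

#define IO_DIRECT_DATA     0x01
#define IO_DIRECT_WAL      0x02
#define IO_DIRECT_WAL_INIT 0x04

/*
 * Parse a comma-separated option string ("data,wal,wal_init") into *flags.
 * Returns false on an unrecognized token or an over-long value; an empty
 * string is valid and sets no flags (direct I/O off).
 */
static bool
parse_io_direct(const char *value, int *flags)
{
	char		buf[128];

	*flags = 0;
	if (strlen(value) >= sizeof(buf))
		return false;
	strcpy(buf, value);

	for (char *tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ","))
	{
		if (strcmp(tok, "data") == 0)
			*flags |= IO_DIRECT_DATA;
		else if (strcmp(tok, "wal") == 0)
			*flags |= IO_DIRECT_WAL;
		else if (strcmp(tok, "wal_init") == 0)
			*flags |= IO_DIRECT_WAL_INIT;
		else
			return false;
	}
	return true;
}
```

Storing the result as a bitmask is what makes the later per-file decisions cheap: each open site just tests one bit to decide whether to OR in the O_DIRECT flag.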
{
"msg_contents": "On Wed, Dec 14, 2022 at 5:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 0001 -- David's palloc_aligned() patch https://commitfest.postgresql.org/41/3999/\n> 0002 -- I/O-align almost all buffers used for I/O\n> 0003 -- Add the GUCs\n> 0004 -- Throwaway hack to make cfbot turn the GUCs on\n\nDavid pushed the first as commit 439f6175, so here is a rebase of the\nrest. I also fixed a couple of thinkos in the handling of systems\nwhere we don't know how to do direct I/O. In one place I had #ifdef\nPG_O_DIRECT, but that's always defined, it's just that it's 0 on\nSolaris and OpenBSD, and the check to reject the GUC wasn't quite\nright on such systems.",
"msg_date": "Thu, 22 Dec 2022 15:04:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 7:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 5:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > 0001 -- David's palloc_aligned() patch https://commitfest.postgresql.org/41/3999/\n> > 0002 -- I/O-align almost all buffers used for I/O\n> > 0003 -- Add the GUCs\n> > 0004 -- Throwaway hack to make cfbot turn the GUCs on\n>\n> David pushed the first as commit 439f6175, so here is a rebase of the\n> rest. I also fixed a couple of thinkos in the handling of systems\n> where we don't know how to do direct I/O. In one place I had #ifdef\n> PG_O_DIRECT, but that's always defined, it's just that it's 0 on\n> Solaris and OpenBSD, and the check to reject the GUC wasn't quite\n> right on such systems.\n\nThanks. I have some comments on\nv3-0002-Add-io_direct-setting-developer-only.patch:\n\n1. I think we don't need to overwrite the io_direct_string in\ncheck_io_direct so that show_io_direct can be avoided.\n2. check_io_direct can leak the flags memory - when io_direct is not\nsupported or for an invalid list syntax or an invalid option is\nspecified.\n\nI have addressed my review comments as a delta patch on top of v3-0002\nand added it here as v1-0001-Review-comments-io_direct-GUC.txt.\n\nSome comments on the tests added:\n\n1. Is there a way to know if Direct IO for WAL and data has been\npicked up programmatically? IOW, can we know if the OS page cache is\nbypassed? I know an external extension pgfincore which can help here,\nbut nothing in the core exists AFAICS.\n+is('10000', $node->safe_psql('postgres', 'select count(*) from t1'),\n\"read back from shared\");\n+is('10000', $node->safe_psql('postgres', 'select * from t2count'),\n\"read back from local\");\n+$node->stop('immediate');\n\n2. Can we combine these two append_conf to a single statement?\n+$node->append_conf('io_direct', 'data,wal,wal_init');\n+$node->append_conf('shared_buffers', '64kB'); # tiny to force I/O\n\n3. 
A nitpick: Can we split these queries multi-line instead of in a single line?\n+$node->safe_psql('postgres', 'begin; create temporary table t2 as\nselect 1 as i from generate_series(1, 10000); update t2 set i = i;\ninsert into t2count select count(*) from t2; commit;');\n\n4. I don't think we need to stop the node before the test ends, no?\n+$node->stop;\n+\n+done_testing();\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 25 Jan 2023 13:27:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 8:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. I have some comments on\n> v3-0002-Add-io_direct-setting-developer-only.patch:\n>\n> 1. I think we don't need to overwrite the io_direct_string in\n> check_io_direct so that show_io_direct can be avoided.\n\nThanks for looking at this, and sorry for the late response. Yeah, agreed.\n\n> 2. check_io_direct can leak the flags memory - when io_direct is not\n> supported or for an invalid list syntax or an invalid option is\n> specified.\n>\n> I have addressed my review comments as a delta patch on top of v3-0002\n> and added it here as v1-0001-Review-comments-io_direct-GUC.txt.\n\nThanks. Your way is nicer. I merged your patch and added you as a co-author.\n\n> Some comments on the tests added:\n>\n> 1. Is there a way to know if Direct IO for WAL and data has been\n> picked up programmatically? IOW, can we know if the OS page cache is\n> bypassed? I know an external extension pgfincore which can help here,\n> but nothing in the core exists AFAICS.\n\nRight, that extension can tell you how many pages are in the kernel\npage cache which is quite interesting for this. 
I also once hacked up\nsomething primitive to see *which* pages are in kernel cache, so I\ncould join that against pg_buffercache to measure double buffering,\nwhen I was studying the \"smile\" shape where pgbench TPS goes down and\nthen back up again as you increase shared_buffers if the working set\nis nearly as big as physical memory (code available in a link from\n[1]).\n\nYeah, I agree it might be nice for human investigators to put\nsomething like that in contrib/pg_buffercache, but I'm not sure you\ncould rely on it enough for an automated test, though, 'cause it\nprobably won't work on some file systems and the tests would probably\nfail for random transient reasons (for example: some systems won't\nkick pages out of kernel cache if they were already there, just\nbecause we decided to open the file with O_DIRECT). (I got curious\nabout why mincore() wasn't standardised along with mmap() and all that\njazz; it seems the BSD and later Sun people who invented all those\ninterfaces didn't think that one was quite good enough[2], but every\n(?) Unixoid OS copied it anyway, with variations... Apparently the\nWindows thing is called VirtualQuery()).\n\n> 2. Can we combine these two append_conf to a single statement?\n> +$node->append_conf('io_direct', 'data,wal,wal_init');\n> +$node->append_conf('shared_buffers', '64kB'); # tiny to force I/O\n\nOK, sure, done. And also oops, that was completely wrong and not\nworking the way I had it in that version...\n\n> 3. A nitpick: Can we split these queries multi-line instead of in a single line?\n> +$node->safe_psql('postgres', 'begin; create temporary table t2 as\n> select 1 as i from generate_series(1, 10000); update t2 set i = i;\n> insert into t2count select count(*) from t2; commit;');\n\nOK.\n\n> 4. 
I don't think we need to stop the node before the test ends, no?\n> +$node->stop;\n> +\n> +done_testing();\n\nSure, but why not?\n\nOtherwise, I rebased, and made a couple more changes:\n\nI found a line of the manual about wal_sync_method that needed to be removed:\n\n- The <literal>open_</literal>* options also use\n<literal>O_DIRECT</literal> if available.\n\nIn fact that sentence didn't correctly document the behaviour in\nreleased branches (wal_level=minimal is also required for that, so\nprobably very few people ever used it). I think we should adjust that\nmisleading sentence in back-branches, separately from this patch set.\n\nI also updated the commit message to highlight the only expected\nuser-visible change for this, namely the loss of the above incorrectly\ndocumented obscure special case, which is replaced by the less obscure\nnew setting io_direct=wal, if someone still wants that behaviour.\n\nAlso a few minor comment changes.\n\n[1] https://twitter.com/MengTangmu/status/994770040745615361\n[2] http://kos.enix.org/pub/gingell8.pdf",
"msg_date": "Sat, 8 Apr 2023 00:35:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "I did some testing with non-default block sizes, and found a few minor\nthings that needed adjustment. The short version is that I blocked\nsome configurations that won't work or would break an assertion.\nAfter a bit more copy-editing on docs and comments and a round of\nautomated indenting, I have now pushed this. I will now watch the\nbuild farm. I tested on quite a few OSes that I have access to, but\nthis is obviously a very OS-sensitive kind of a thing.\n\nThe adjustments were:\n\n1. If you set your BLCKSZ or XLOG_BLCKSZ smaller than\nPG_IO_ALIGN_SIZE, you shouldn't be allowed to turn on direct I/O for\nthe relevant operations, because such undersized direct I/Os will fail\non common systems.\n\nFATAL: invalid value for parameter \"io_direct\": \"wal\"\nDETAIL: io_direct is not supported for WAL because XLOG_BLCKSZ is too small\n\nFATAL: invalid value for parameter \"io_direct\": \"data\"\nDETAIL: io_direct is not supported for data because BLCKSZ is too small\n\nIn fact some systems would be OK with it if the true requirement is\n512 not 4096, but (1) tiny blocks are a niche build option that\ndoesn't even pass regression tests and (2) it's hard and totally\nunportable to find out the true requirement at runtime, and (3) the\nconservative choice of 4096 has additional benefits by matching memory\npages. So I think a conservative compile-time number is a good\nstarting position.\n\n2. Previously I had changed the WAL buffer alignment to be the larger\nof PG_IO_ALIGN_SIZE and XLOG_BLCKSZ, but in light of the above\nthinking, I reverted that part (no point in aligning the address of\nthe buffer when the size is too small for direct I/O, but now that\ncombination is blocked off at GUC level so we don't need any change\nhere).\n\n3. 
I updated the md.c alignment assertions to allow for tiny blocks.\nThe point of these assertions is to fail if any new code does I/O from\nbadly aligned buffers even with io_direct turned off (ie how most\npeople hack), 'cause that will fail with io_direct turned on. The\nchange is that I don't make the assertion if you're using BLCKSZ <\nPG_IO_ALIGN_SIZE. Such buffers wouldn't work if used for direct I/O\nbut that's OK, the GUC won't allow it.\n\n4. I made the language to explain where PG_IO_ALIGN_SIZE really comes\nfrom a little vaguer because it's complex.\n\n\n",
"msg_date": "Sat, 8 Apr 2023 16:47:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
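Adjustments 1 and 3 boil down to two small predicates, sketched here stand-alone (illustrative only; the real checks live in the GUC validation and the md.c assertions): a block size smaller than the alignment quantum cannot be used with direct I/O at all, and a buffer handed to smgr must start on an I/O-aligned address for direct I/O to work.

```c
#include <stdbool.h>
#include <stdint.h>

#define PG_IO_ALIGN_SIZE 4096

/* Rule 1: reject io_direct settings whose block size is too small. */
static bool
block_size_ok_for_direct_io(int blcksz)
{
	return blcksz >= PG_IO_ALIGN_SIZE;
}

/* Rule 3: a buffer used for (potential) direct I/O must be I/O-aligned. */
static bool
buffer_io_aligned(const void *buf)
{
	return ((uintptr_t) buf % PG_IO_ALIGN_SIZE) == 0;
}
```

Note the asymmetry described above: the address check is asserted even with io_direct off (so misaligned buffers are caught early), while the size check is enforced only when someone tries to turn the GUC on.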
{
"msg_contents": "On Sat, Apr 8, 2023 at 4:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> After a bit more copy-editing on docs and comments and a round of\n> automated indenting, I have now pushed this. I will now watch the\n> build farm. I tested on quite a few OSes that I have access to, but\n> this is obviously a very OS-sensitive kind of a thing.\n\nHmm. I see a strange \"invalid page\" failure on Andrew's machine crake\nin 004_io_direct.pl. Let's see what else comes in.\n\n\n",
"msg_date": "Sat, 8 Apr 2023 16:59:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I did some testing with non-default block sizes, and found a few minor\n> things that needed adjustment.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-04-08%2004%3A42%3A04\n\nThis seems like another thing that should not have been pushed mere\nhours before feature freeze.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 01:03:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-08 16:59:20 +1200, Thomas Munro wrote:\n> On Sat, Apr 8, 2023 at 4:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > After a bit more copy-editing on docs and comments and a round of\n> > automated indenting, I have now pushed this. I will now watch the\n> > build farm. I tested on quite a few OSes that I have access to, but\n> > this is obviously a very OS-sensitive kind of a thing.\n> \n> Hmm. I see a strange \"invalid page\" failure on Andrew's machine crake\n> in 004_io_direct.pl. Let's see what else comes in.\n\nThere were some failures in CI (e.g. [1] (and perhaps also bf, didn't yet\ncheck), about \"no unpinned buffers available\". I was worried for a moment\nthat this could actually be relation to the bulk extension patch.\n\nBut it looks like it's older - and not caused by direct_io support (except by\nway of the test existing). I reproduced the issue locally by setting s_b even\nlower, to 16 and made the ERROR a PANIC.\n\n#4 0x00005624dfe90336 in errfinish (filename=0x5624df6867c0 \"../../../../home/andres/src/postgresql/src/backend/storage/buffer/freelist.c\", lineno=353, \n funcname=0x5624df686900 <__func__.6> \"StrategyGetBuffer\") at ../../../../home/andres/src/postgresql/src/backend/utils/error/elog.c:604\n#5 0x00005624dfc71dbe in StrategyGetBuffer (strategy=0x0, buf_state=0x7ffd4182137c, from_ring=0x7ffd4182137b)\n at ../../../../home/andres/src/postgresql/src/backend/storage/buffer/freelist.c:353\n#6 0x00005624dfc6a922 in GetVictimBuffer (strategy=0x0, io_context=IOCONTEXT_NORMAL)\n at ../../../../home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:1601\n#7 0x00005624dfc6a29f in BufferAlloc (smgr=0x5624e1ff27f8, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=16, strategy=0x0, foundPtr=0x7ffd418214a3, \n io_context=IOCONTEXT_NORMAL) at ../../../../home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:1290\n#8 0x00005624dfc69c0c in ReadBuffer_common (smgr=0x5624e1ff27f8, 
relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=16, mode=RBM_NORMAL, strategy=0x0, \n hit=0x7ffd4182156b) at ../../../../home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:1056\n#9 0x00005624dfc69335 in ReadBufferExtended (reln=0x5624e1ee09f0, forkNum=MAIN_FORKNUM, blockNum=16, mode=RBM_NORMAL, strategy=0x0)\n at ../../../../home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:776\n#10 0x00005624df8eb78a in log_newpage_range (rel=0x5624e1ee09f0, forknum=MAIN_FORKNUM, startblk=0, endblk=45, page_std=false)\n at ../../../../home/andres/src/postgresql/src/backend/access/transam/xloginsert.c:1290\n#11 0x00005624df9567e7 in smgrDoPendingSyncs (isCommit=true, isParallelWorker=false)\n at ../../../../home/andres/src/postgresql/src/backend/catalog/storage.c:837\n#12 0x00005624df8d1dd2 in CommitTransaction () at ../../../../home/andres/src/postgresql/src/backend/access/transam/xact.c:2225\n#13 0x00005624df8d2da2 in CommitTransactionCommand () at ../../../../home/andres/src/postgresql/src/backend/access/transam/xact.c:3060\n#14 0x00005624dfcbe0a1 in finish_xact_command () at ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:2779\n#15 0x00005624dfcbb867 in exec_simple_query (query_string=0x5624e1eacd98 \"create table t1 as select 1 as i from generate_series(1, 10000)\")\n at ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:1299\n#16 0x00005624dfcc09c4 in PostgresMain (dbname=0x5624e1ee40e8 \"postgres\", username=0x5624e1e6c5f8 \"andres\")\n at ../../../../home/andres/src/postgresql/src/backend/tcop/postgres.c:4623\n#17 0x00005624dfbecc03 in BackendRun (port=0x5624e1ed8250) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4461\n#18 0x00005624dfbec48e in BackendStartup (port=0x5624e1ed8250) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:4189\n#19 0x00005624dfbe8541 in ServerLoop () at 
../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1779\n#20 0x00005624dfbe7e56 in PostmasterMain (argc=4, argv=0x5624e1e6a520) at ../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1463\n#21 0x00005624dfad538b in main (argc=4, argv=0x5624e1e6a520) at ../../../../home/andres/src/postgresql/src/backend/main/main.c:200\n\n\nIf you look at log_newpage_range(), it's not surprising that we get this error\n- it pins up to 32 buffers at once.\n\nAfaics log_newpage_range() originates in 9155580fd5fc, but this caller is from\nc6b92041d385.\n\n\nIt doesn't really seem OK to me to unconditionally pin 32 buffers. For the\nrelation extension patch I introduced LimitAdditionalPins() to deal with this\nconcern. Perhaps it needs to be exposed and log_newpage_buffers() should use\nit?\n\n\nDo we care about fixing this in the backbranches? Probably not, given there\nhaven't been user complaints?\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://cirrus-ci.com/task/4519721039560704\n\n\n",
"msg_date": "Fri, 7 Apr 2023 23:04:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 4:59 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Apr 8, 2023 at 4:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > After a bit more copy-editing on docs and comments and a round of\n> > automated indenting, I have now pushed this. I will now watch the\n> > build farm. I tested on quite a few OSes that I have access to, but\n> > this is obviously a very OS-sensitive kind of a thing.\n>\n> Hmm. I see a strange \"invalid page\" failure on Andrew's machine crake\n> in 004_io_direct.pl. Let's see what else comes in.\n\nNo particular ideas about what happened there yet. It *looks* like we\nwrote out a page, and then read it back in very soon afterwards, all\nvia the usual locked bufmgr/smgr pathways, and it failed basic page\nvalidation. The reader was a parallel worker, because of the\ndebug_parallel_mode setting on that box. The page number looks\nreasonable (I guess it's reading a page created by the UPDATE full of\nnew tuples, but I don't know which process wrote it). It's also not\nimmediately obvious how this could be connected to the 32 pinned\nbuffer problem (all that would have happened in the CREATE TABLE\nprocess which ended already before the UPDATE and then the SELECT\nbackends even started).\n\nAndrew, what file system and type of disk is that machine using?\n\n\n",
"msg_date": "Sat, 8 Apr 2023 21:25:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 23:04:08 -0700, Andres Freund wrote:\n> There were some failures in CI (e.g. [1] (and perhaps also bf, didn't yet\n> check), about \"no unpinned buffers available\". I was worried for a moment\n> that this could actually be relation to the bulk extension patch.\n> \n> But it looks like it's older - and not caused by direct_io support (except by\n> way of the test existing). I reproduced the issue locally by setting s_b even\n> lower, to 16 and made the ERROR a PANIC.\n>\n> [backtrace]\n> \n> If you look at log_newpage_range(), it's not surprising that we get this error\n> - it pins up to 32 buffers at once.\n> \n> Afaics log_newpage_range() originates in 9155580fd5fc, but this caller is from\n> c6b92041d385.\n> \n> \n> It doesn't really seem OK to me to unconditionally pin 32 buffers. For the\n> relation extension patch I introduced LimitAdditionalPins() to deal with this\n> concern. Perhaps it needs to be exposed and log_newpage_buffers() should use\n> it?\n> \n> \n> Do we care about fixing this in the backbranches? Probably not, given there\n> haven't been user complaints?\n\nHere's a quick prototype of this approach. If we expose LimitAdditionalPins(),\nwe'd probably want to add \"Buffer\" to the name, and pass it a relation, so\nthat it can hand off LimitAdditionalLocalPins() when appropriate? The callsite\nin question doesn't need it, but ...\n\nWithout the limiting of pins the modified 004_io_direct.pl fails 100% of the\ntime for me.\n\nPresumably the reason it fails occasionally with 256kB of shared buffers\n(i.e. NBuffers=32) is that autovacuum or checkpointer briefly pins a single\nbuffer. As log_newpage_range() thinks it can just pin 32 buffers\nunconditionally, it fails in that case.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 8 Apr 2023 11:08:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
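The limiting idea can be sketched in a few lines: cap a batch pin request at a fair share of the buffer pool rather than pinning a fixed 32 buffers unconditionally. This is loosely modeled on what LimitAdditionalPins() does; the real function also accounts for pins the backend already holds, which this toy version ignores:

```c
/*
 * Return how many of `desired` buffers a batch operation may pin, given
 * `nbuffers` shared buffers and `max_backends` potential pinners.  With a
 * tiny pool (e.g. NBuffers = 16) this caps the batch well below 32, which
 * avoids the "no unpinned buffers available" failure described above.
 */
static int
limit_batch_pins(int desired, int nbuffers, int max_backends)
{
	int			fair_share = nbuffers / max_backends;

	if (fair_share < 1)
		fair_share = 1;			/* always allow pinning at least one buffer */
	return desired < fair_share ? desired : fair_share;
}
```

A caller like log_newpage_range() would then loop, pinning and logging at most limit_batch_pins(32, ...) pages per iteration instead of a fixed 32.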
{
"msg_contents": "Hi,\n\nGiven the frequency of failures on this in the buildfarm, I propose using the\ntemporary workaround of using wal_level=replica. That avoids the use of the\nover-eager log_newpage_range().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Apr 2023 11:55:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 6:55 AM Andres Freund <andres@anarazel.de> wrote:\n> Given the frequency of failures on this in the buildfarm, I propose using the\n> temporary workaround of using wal_level=replica. That avoids the use of the\n> over-eager log_newpage_range().\n\nWill do.\n\n\n",
"msg_date": "Sun, 9 Apr 2023 08:13:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Apr 9, 2023 at 6:55 AM Andres Freund <andres@anarazel.de> wrote:\n>> Given the frequency of failures on this in the buildfarm, I propose using the\n>> temporary workaround of using wal_level=replica. That avoids the use of the\n>> over-eager log_newpage_range().\n\n> Will do.\n\nNow crake is doing this:\n\n2023-04-08 16:50:03.177 EDT [2023-04-08 16:50:03 EDT 3257645:3] 004_io_direct.pl LOG: statement: select count(*) from t1\n2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:1] ERROR: invalid page in block 56 of relation base/5/16384\n2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:2] STATEMENT: select count(*) from t1\n2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:4] 004_io_direct.pl ERROR: invalid page in block 56 of relation base/5/16384\n2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:5] 004_io_direct.pl STATEMENT: select count(*) from t1\n2023-04-08 16:50:03.319 EDT [2023-04-08 16:50:02 EDT 3257591:4] LOG: background worker \"parallel worker\" (PID 3257646) exited with exit code 1\n\nThe fact that the error is happening in a parallel worker seems\ninteresting ...\n\n(BTW, why are the log lines doubly timestamped?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 17:10:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 9:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 2023-04-08 16:50:03.177 EDT [2023-04-08 16:50:03 EDT 3257645:3] 004_io_direct.pl LOG: statement: select count(*) from t1\n> 2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:1] ERROR: invalid page in block 56 of relation base/5/16384\n\n> The fact that the error is happening in a parallel worker seems\n> interesting ...\n\nThat's because it's running with debug_parallel_query=regress. I've\nbeen trying to repro that but no luck... A different kind of failure\nalso showed up, where it counted the wrong number of tuples:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2023-04-08%2015%3A52%3A03\n\nA paranoid explanation would be that this system is failing to provide\nbasic I/O coherency, we're writing pages out and not reading them back\nin. Or of course there is a dumb bug... but why only here? Can of\ncourse be timing-sensitive and it's interesting that crake suffers\nfrom the \"no unpinned buffers available\" thing (which should now be\ngone) with higher frequency; I'm keen to see if the dodgy-read problem\ncontinues with a similar frequency now.\n\n\n",
"msg_date": "Sun, 9 Apr 2023 09:15:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-08 17:10:19 -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> Now crake is doing this:\n> \n> 2023-04-08 16:50:03.177 EDT [2023-04-08 16:50:03 EDT 3257645:3] 004_io_direct.pl LOG: statement: select count(*) from t1\n> 2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:1] ERROR: invalid page in block 56 of relation base/5/16384\n> 2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:2] STATEMENT: select count(*) from t1\n> 2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:4] 004_io_direct.pl ERROR: invalid page in block 56 of relation base/5/16384\n> 2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:5] 004_io_direct.pl STATEMENT: select count(*) from t1\n> 2023-04-08 16:50:03.319 EDT [2023-04-08 16:50:02 EDT 3257591:4] LOG: background worker \"parallel worker\" (PID 3257646) exited with exit code 1\n> \n> The fact that the error is happening in a parallel worker seems\n> interesting ...\n\nThere were a few prior instances of that error. One that I hadn't seen before\nis this:\n\n[11:35:07.190](0.001s) # Failed test 'read back from shared'\n# at /home/andrew/bf/root/HEAD/pgsql/src/test/modules/test_misc/t/004_io_direct.pl line 43.\n[11:35:07.190](0.000s) # got: '10000'\n# expected: '10098'\n\nFor one it points to the arguments to is() being switched around, but that's a\nsideshow.\n\n\n> (BTW, why are the log lines doubly timestamped?)\n\nIt's odd.\n\nIt's also odd that it's just crake having the issue. It's just a linux host,\nafaics. Andrew, is there any chance you can run that test in isolation and see\nwhether it reproduces? If so, does the problem vanish, if you comment out the\nio_direct= in the test? 
Curious whether this is actually an O_DIRECT issue, or\nwhether it's an independent issue exposed by the new test.\n\n\nI wonder if we should make the test use data checksum - if we continue to see\nthe wrong query results, the corruption is more likely to be in memory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Apr 2023 14:23:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-08 17:10:19 -0400, Tom Lane wrote:\n>> (BTW, why are the log lines doubly timestamped?)\n\n> It's odd.\n\nOh, I guess that's intentional, because crake has\n\n 'log_line_prefix = \\'%m [%s %p:%l] %q%a \\'',\n\n> It's also odd that it's just crake having the issue. It's just a linux host,\n> afaics.\n\nIndeed. I'm guessing from the compiler version that it's Fedora 37 now\n(the lack of such basic information in the meson configuration output\nis pretty annoying). I've been trying to repro it here on an F37 box,\nwith no success, suggesting that it's very timing sensitive. Or maybe\nit's inside a VM and that matters?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 17:31:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-08 17:31:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-04-08 17:10:19 -0400, Tom Lane wrote:\n> > It's also odd that it's just crake having the issue. It's just a linux host,\n> > afaics.\n> \n> Indeed. I'm guessing from the compiler version that it's Fedora 37 now\n\nThe 15 branch says:\n\nhostname = neoemma\nuname -m = x86_64\nuname -r = 6.2.8-100.fc36.x86_64\nuname -s = Linux\nuname -v = #1 SMP PREEMPT_DYNAMIC Wed Mar 22 19:14:19 UTC 2023\n\nSo at least the kernel claims to be 36...\n\n\n> (the lack of such basic information in the meson configuration output\n> is pretty annoying).\n\nYea, I was thinking yesterday that we should add uname output to meson's\nconfigure (if available). I'm sure we can figure out a reasonably fast windows\ncommand for the version, too.\n\n\n> I've been trying to repro it here on an F37 box, with no success, suggesting\n> that it's very timing sensitive. Or maybe it's inside a VM and that\n> matters?\n\nCould also be filesystem specific?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Apr 2023 14:42:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-08 Sa 17:42, Andres Freund wrote:\n> Hi,\n>\n> On 2023-04-08 17:31:02 -0400, Tom Lane wrote:\n>> Andres Freund<andres@anarazel.de> writes:\n>>> On 2023-04-08 17:10:19 -0400, Tom Lane wrote:\n>>> It's also odd that it's just crake having the issue. It's just a linux host,\n>>> afaics.\n>> Indeed. I'm guessing from the compiler version that it's Fedora 37 now\n> The 15 branch says:\n>\n> hostname = neoemma\n> uname -m = x86_64\n> uname -r = 6.2.8-100.fc36.x86_64\n> uname -s = Linux\n> uname -v = #1 SMP PREEMPT_DYNAMIC Wed Mar 22 19:14:19 UTC 2023\n>\n> So at least the kernel claims to be 36...\n>\n>\n>> (the lack of such basic information in the meson configuration output\n>> is pretty annoying).\n> Yea, I was thinking yesterday that we should add uname output to meson's\n> configure (if available). I'm sure we can figure out a reasonably fast windows\n> command for the version, too.\n>\n>\n>> I've been trying to repro it here on an F37 box, with no success, suggesting\n>> that it's very timing sensitive. Or maybe it's inside a VM and that\n>> matters?\n> Could also be filesystem specific?\n>\n\nI migrated it in February from a VM to a non-virtual instance. Almost \nnothing else runs on the machine. The personality info shown on the BF \nserver is correct.\n\nandrew@neoemma:~ $ cat /etc/fedora-release\nFedora release 36 (Thirty Six)\nandrew@neoemma:~ $ uname -a\nLinux neoemma 6.2.8-100.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Mar 22 \n19:14:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux\nandrew@neoemma:~ $ gcc --version\ngcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)\nandrew@neoemma:~ $ mount | grep home\n/dev/mapper/luks-xxxxxxx on /home type btrfs \n(rw,relatime,seclabel,compress=zstd:1,ssd,discard=async,space_cache,subvolid=256,subvol=/home)\n\n\nI guess it could be btrfs-specific. 
I'll be somewhat annoyed if I have \nto re-init the machine to use something else.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-08 Sa 17:42, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-04-08 17:31:02 -0400, Tom Lane wrote:\n\n\nAndres Freund <andres@anarazel.de> writes:\n\n\nOn 2023-04-08 17:10:19 -0400, Tom Lane wrote:\nIt's also odd that it's just crake having the issue. It's just a linux host,\nafaics.\n\n\n\nIndeed. I'm guessing from the compiler version that it's Fedora 37 now\n\n\n\nThe 15 branch says:\n\nhostname = neoemma\nuname -m = x86_64\nuname -r = 6.2.8-100.fc36.x86_64\nuname -s = Linux\nuname -v = #1 SMP PREEMPT_DYNAMIC Wed Mar 22 19:14:19 UTC 2023\n\nSo at least the kernel claims to be 36...\n\n\n\n\n(the lack of such basic information in the meson configuration output\nis pretty annoying).\n\n\n\nYea, I was thinking yesterday that we should add uname output to meson's\nconfigure (if available). I'm sure we can figure out a reasonably fast windows\ncommand for the version, too.\n\n\n\n\nI've been trying to repro it here on an F37 box, with no success, suggesting\nthat it's very timing sensitive. Or maybe it's inside a VM and that\nmatters?\n\n\n\nCould also be filesystem specific?\n\n\n\n\n\nI migrated it in February from a VM to a non-virtual instance.\n Almost nothing else runs on the machine. The personality info\n shown on the BF server is correct.\n\nandrew@neoemma:~ $ cat /etc/fedora-release \n Fedora release 36 (Thirty Six)\n andrew@neoemma:~ $ uname -a\n Linux neoemma 6.2.8-100.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Mar\n 22 19:14:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux\n andrew@neoemma:~ $ gcc --version\n gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)\n andrew@neoemma:~ $ mount | grep home\n /dev/mapper/luks-xxxxxxx on /home type btrfs\n(rw,relatime,seclabel,compress=zstd:1,ssd,discard=async,space_cache,subvolid=256,subvol=/home)\n\n\nI guess it could be btrfs-specific. 
I'll be somewhat annoyed if I\n have to re-init the machine to use something else.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 8 Apr 2023 18:08:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 10:08 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> btrfs\n\nAha!\n\n\n",
"msg_date": "Sun, 9 Apr 2023 10:10:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-08 Sa 17:23, Andres Freund wrote:\n> Hi,\n>\n> On 2023-04-08 17:10:19 -0400, Tom Lane wrote:\n>> Thomas Munro<thomas.munro@gmail.com> writes:\n>> Now crake is doing this:\n>>\n>> 2023-04-08 16:50:03.177 EDT [2023-04-08 16:50:03 EDT 3257645:3] 004_io_direct.pl LOG: statement: select count(*) from t1\n>> 2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:1] ERROR: invalid page in block 56 of relation base/5/16384\n>> 2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:2] STATEMENT: select count(*) from t1\n>> 2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:4] 004_io_direct.pl ERROR: invalid page in block 56 of relation base/5/16384\n>> 2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:5] 004_io_direct.pl STATEMENT: select count(*) from t1\n>> 2023-04-08 16:50:03.319 EDT [2023-04-08 16:50:02 EDT 3257591:4] LOG: background worker \"parallel worker\" (PID 3257646) exited with exit code 1\n>>\n>> The fact that the error is happening in a parallel worker seems\n>> interesting ...\n> There were a few prior instances of that error. One that I hadn't seen before\n> is this:\n>\n> [11:35:07.190](0.001s) # Failed test 'read back from shared'\n> # at /home/andrew/bf/root/HEAD/pgsql/src/test/modules/test_misc/t/004_io_direct.pl line 43.\n> [11:35:07.190](0.000s) # got: '10000'\n> # expected: '10098'\n>\n> For one it points to the arguments to is() being switched around, but that's a\n> sideshow.\n>\n>\n> It's also odd that it's just crake having the issue. It's just a linux host,\n> afaics. Andrew, is there any chance you can run that test in isolation and see\n> whether it reproduces? If so, does the problem vanish, if you comment out the\n> io_direct= in the test? 
Curious whether this is actually an O_DIRECT issue, or\n> whether it's an independent issue exposed by the new test.\n>\n>\n> I wonder if we should make the test use data checksum - if we continue to see\n> the wrong query results, the corruption is more likely to be in memory.\n>\n\nI can run the test in isolation, and it's get an error reliably.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-08 Sa 17:23, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-04-08 17:10:19 -0400, Tom Lane wrote:\n\n\nThomas Munro <thomas.munro@gmail.com> writes:\nNow crake is doing this:\n\n2023-04-08 16:50:03.177 EDT [2023-04-08 16:50:03 EDT 3257645:3] 004_io_direct.pl LOG: statement: select count(*) from t1\n2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:1] ERROR: invalid page in block 56 of relation base/5/16384\n2023-04-08 16:50:03.316 EDT [2023-04-08 16:50:03 EDT 3257646:2] STATEMENT: select count(*) from t1\n2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:4] 004_io_direct.pl ERROR: invalid page in block 56 of relation base/5/16384\n2023-04-08 16:50:03.317 EDT [2023-04-08 16:50:03 EDT 3257645:5] 004_io_direct.pl STATEMENT: select count(*) from t1\n2023-04-08 16:50:03.319 EDT [2023-04-08 16:50:02 EDT 3257591:4] LOG: background worker \"parallel worker\" (PID 3257646) exited with exit code 1\n\nThe fact that the error is happening in a parallel worker seems\ninteresting ...\n\n\n\nThere were a few prior instances of that error. One that I hadn't seen before\nis this:\n\n[11:35:07.190](0.001s) # Failed test 'read back from shared'\n# at /home/andrew/bf/root/HEAD/pgsql/src/test/modules/test_misc/t/004_io_direct.pl line 43.\n[11:35:07.190](0.000s) # got: '10000'\n# expected: '10098'\n\nFor one it points to the arguments to is() being switched around, but that's a\nsideshow.\n\n\n\n\nIt's also odd that it's just crake having the issue. It's just a linux host,\nafaics. 
Andrew, is there any chance you can run that test in isolation and see\nwhether it reproduces? If so, does the problem vanish, if you comment out the\nio_direct= in the test? Curious whether this is actually an O_DIRECT issue, or\nwhether it's an independent issue exposed by the new test.\n\n\nI wonder if we should make the test use data checksum - if we continue to see\nthe wrong query results, the corruption is more likely to be in memory.\n\n\n\n\n\nI can run the test in isolation, and it's get an error reliably.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 8 Apr 2023 18:17:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 10:17 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I can run the test in isolation, and it's get an error reliably.\n\nRandom idea: it looks like you have compression enabled. What if you\nturn it off in the directory where the test runs? Something like\nbtrfs property set <file> compression ... according to the\nintergoogles. (I have never used btrfs before 6 minutes ago but I\ncan't seem to repro this with basic settings in a loopback btrfs\nfilesystems).\n\n\n",
"msg_date": "Sun, 9 Apr 2023 10:50:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Apr 9, 2023 at 10:08 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> btrfs\n\n> Aha!\n\nGoogling finds a lot of suggestions that O_DIRECT doesn't play nice\nwith btrfs, for example\n\nhttps://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg92824.html\n\nIt's not clear to me how much of that lore is still current,\nbut it's disturbing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 19:05:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Googling finds a lot of suggestions that O_DIRECT doesn't play nice\n> with btrfs, for example\n>\n> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg92824.html\n>\n> It's not clear to me how much of that lore is still current,\n> but it's disturbing.\n\nI think that particular thing might relate to modifications of the\nuser buffer while a write is in progress (breaking btrfs's internal\nchecksums). I don't think we should ever do that ourselves (not least\nbecause it'd break our own checksums). We lock the page during the\nwrite so no one can do that, and then we sleep in a synchronous\nsyscall.\n\nHere's something recent. I guess it's probably not relevant (a fault\non our buffer that we recently touched sounds pretty unlikely), but\nwho knows... (developer lists for file systems are truly terrifying\nplaces to drive through).\n\nhttps://lore.kernel.org/linux-btrfs/20230315195231.GW10580@twin.jikos.cz/T/\n\nIt's odd, though, if it is their bug and not ours: I'd expect our\nfriends in other databases to have hit all that sort of thing years\nago, since many comparable systems have a direct I/O knob*. What are\nwe doing differently? Are our multiple processes a factor here,\nbreaking some coherency logic? Unsurprisingly, having compression on\nas Andrew does actually involves buffering anyway[1] despite our\nO_DIRECT flag, but maybe that's saying writes are buffered but reads\nare still direct (?), which sounds like the sort of initial conditions\nthat might produce a coherency bug. I dunno.\n\nI gather that btrfs is actually Fedora's default file system (or maybe\nit's just \"laptops and desktops\"[2]?). I wonder if any of the several\ngreen Fedora systems in the 'farm are using btrfs. 
I wonder if they\nare using different mount options (thinking again of compression).\n\n*Probably a good reason to add a more prominent warning that the\nfeature is developer-only, experimental and not for production use.\nI'm thinking a warning at startup or something.\n\n[1] https://btrfs.readthedocs.io/en/latest/Compression.html\n[2] https://fedoraproject.org/wiki/Changes/BtrfsByDefault\n\n\n",
"msg_date": "Sun, 9 Apr 2023 13:55:33 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> It's odd, though, if it is their bug and not ours: I'd expect our\n> friends in other databases to have hit all that sort of thing years\n> ago, since many comparable systems have a direct I/O knob*.\n\nYeah, it seems moderately likely that it's our own bug ... but this\ncode's all file-system-ignorant, so how? Maybe we are breaking some\nPOSIX rule that btrfs exploits but others don't?\n\n> I gather that btrfs is actually Fedora's default file system (or maybe\n> it's just \"laptops and desktops\"[2]?).\n\nI have a ton of Fedora images laying about, and I doubt that any of them\nuse btrfs, mainly because that's not the default in the \"server spin\"\nwhich is what I usually install from. It's hard to guess about the\nbuildfarm, but it wouldn't surprise me that most of them are on xfs.\n(If we haven't figured this out pretty shortly, I'm probably going to\nput together a btrfs-on-bare-metal machine to try to duplicate crake's\nresults.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 22:05:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-09 13:55:33 +1200, Thomas Munro wrote:\n> I think that particular thing might relate to modifications of the\n> user buffer while a write is in progress (breaking btrfs's internal\n> checksums). I don't think we should ever do that ourselves (not least\n> because it'd break our own checksums). We lock the page during the\n> write so no one can do that, and then we sleep in a synchronous\n> syscall.\n\nOh, but we actually *do* modify pages while IO is going on. I wonder if you\nhit the jack pot here. The content lock doesn't prevent hint bit\nwrites. That's why we copy the page to temporary memory when computing\nchecksums.\n\nI think we should modify the test to enable checksums - if the problem goes\naway, then it's likely to be related to modifying pages while an O_DIRECT\nwrite is ongoing...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Apr 2023 19:18:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 2:18 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-04-09 13:55:33 +1200, Thomas Munro wrote:\n> > I think that particular thing might relate to modifications of the\n> > user buffer while a write is in progress (breaking btrfs's internal\n> > checksums). I don't think we should ever do that ourselves (not least\n> > because it'd break our own checksums). We lock the page during the\n> > write so no one can do that, and then we sleep in a synchronous\n> > syscall.\n>\n> Oh, but we actually *do* modify pages while IO is going on. I wonder if you\n> hit the jack pot here. The content lock doesn't prevent hint bit\n> writes. That's why we copy the page to temporary memory when computing\n> checksums.\n\nMore like the jackpot hit me.\n\nWoo, I can now reproduce this locally on a loop filesystem.\nPreviously I had missed a step, the parallel worker seems to be\nnecessary. More soon.\n\n\n",
"msg_date": "Sun, 9 Apr 2023 14:56:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sat, Apr 08, 2023 at 11:08:16AM -0700, Andres Freund wrote:\n> On 2023-04-07 23:04:08 -0700, Andres Freund wrote:\n> > There were some failures in CI (e.g. [1] (and perhaps also bf, didn't yet\n> > check), about \"no unpinned buffers available\". I was worried for a moment\n> > that this could actually be relation to the bulk extension patch.\n> > \n> > But it looks like it's older - and not caused by direct_io support (except by\n> > way of the test existing). I reproduced the issue locally by setting s_b even\n> > lower, to 16 and made the ERROR a PANIC.\n> >\n> > [backtrace]\n\nI get an ERROR, not a PANIC:\n\n$ git rev-parse HEAD\n2e57ffe12f6b5c1498f29cb7c0d9e17c797d9da6\n$ git diff -U0\ndiff --git a/src/test/modules/test_misc/t/004_io_direct.pl b/src/test/modules/test_misc/t/004_io_direct.pl\nindex f5bf0b1..8f0241b 100644\n--- a/src/test/modules/test_misc/t/004_io_direct.pl\n+++ b/src/test/modules/test_misc/t/004_io_direct.pl\n@@ -25 +25 @@ io_direct = 'data,wal,wal_init'\n-shared_buffers = '256kB' # tiny to force I/O\n+shared_buffers = 16\n$ ./configure -C --enable-debug --enable-cassert --enable-depend --enable-tap-tests --with-tcl --with-python --with-perl\n$ make -C src/test/modules/test_misc check PROVE_TESTS=t/004_io_direct.pl\n# +++ tap check in src/test/modules/test_misc +++\nt/004_io_direct.pl .. 
Dubious, test returned 29 (wstat 7424, 0x1d00)\nNo subtests run \n\nTest Summary Report\n-------------------\nt/004_io_direct.pl (Wstat: 7424 Tests: 0 Failed: 0)\n Non-zero exit status: 29\n Parse errors: No plan found in TAP output\nFiles=1, Tests=0, 1 wallclock secs ( 0.01 usr 0.00 sys + 0.41 cusr 0.14 csys = 0.56 CPU)\nResult: FAIL\nmake: *** [../../../../src/makefiles/pgxs.mk:460: check] Error 1\n$ grep pinned src/test/modules/test_misc/tmp_check/log/*\nsrc/test/modules/test_misc/tmp_check/log/004_io_direct_main.log:2023-04-08 21:12:46.781 PDT [929628] 004_io_direct.pl ERROR: no unpinned buffers available\nsrc/test/modules/test_misc/tmp_check/log/regress_log_004_io_direct:error running SQL: 'psql:<stdin>:1: ERROR: no unpinned buffers available'\n\nNo good reason to PANIC there, so the path to PANIC may be fixable.\n\n> > If you look at log_newpage_range(), it's not surprising that we get this error\n> > - it pins up to 32 buffers at once.\n> > \n> > Afaics log_newpage_range() originates in 9155580fd5fc, but this caller is from\n> > c6b92041d385.\n\n> > Do we care about fixing this in the backbranches? Probably not, given there\n> > haven't been user complaints?\n\nI would not. This is only going to come up where the user goes out of the way\nto use near-minimum shared_buffers.\n\n> Here's a quick prototype of this approach.\n\nThis looks fine. I'm not enthusiastic about incurring post-startup cycles to\ncater to allocating less than 512k*max_connections of shared buffers, but I\nexpect the cycles in question are negligible here.\n\n\n",
"msg_date": "Sat, 8 Apr 2023 21:29:54 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Indeed, I can't reproduce this with (our) checksums on. I also can't\nreproduce it with O_DIRECT off. I also can't reproduce it if I use\n\"mkdir pgdata && chattr +C pgdata && initdb -D pgdata\" to have a\npgdata directory with copy-on-write and (their) checksums disabled.\nBut it reproduces quite easily with COW on (default behaviour) with\nio_direct=data, debug_parallel_query=debug, create table as ...;\nupdate ...; select count(*) ...; from that test.\n\nUnfortunately my mental model of btrfs is extremely limited, basically\njust \"something a bit like ZFS\". FWIW I've been casually following\nalong with OpenZFS's ongoing O_DIRECT project, and I know that the\nplan there is to make a temporary stable copy if checksums and other\nfeatures are on (a bit like PostgreSQL does for the same reason, as\nyou reminded us). Time will tell how that works out but it *seems*\nlike all available modes would therefore work correctly for us, with\ndifferent tradeoffs (ie if you want the fastest zero-copy I/O, don't\nuse checksums, compression, etc).\n\nHere, btrfs seems to be taking a different path that I can't quite\nmake out... I see no warning/error about a checksum failure like [1],\nand we apparently managed to read something other than a mix of the\nold and new page contents (which, based on your hypothesis, should\njust leave it indeterminate whether the hint bit changes were captured\nor not, and the rest of the page should be stable, right). It's like\nthe page time-travelled or got scrambled in some other way, but it\ndidn't tell us? I'll try to dig further...\n\n[1] https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/index.php/Gotchas.html#Direct_IO_and_CRCs\n\n\n",
"msg_date": "Sun, 9 Apr 2023 16:52:10 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-08 Sa 18:50, Thomas Munro wrote:\n> On Sun, Apr 9, 2023 at 10:17 AM Andrew Dunstan<andrew@dunslane.net> wrote:\n>> I can run the test in isolation, and it's get an error reliably.\n> Random idea: it looks like you have compression enabled. What if you\n> turn it off in the directory where the test runs? Something like\n> btrfs property set <file> compression ... according to the\n> intergoogles. (I have never used btrfs before 6 minutes ago but I\n> can't seem to repro this with basic settings in a loopback btrfs\n> filesystems).\n\n\nDidn't seem to make any difference.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-08 Sa 18:50, Thomas Munro\n wrote:\n\n\nOn Sun, Apr 9, 2023 at 10:17 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nI can run the test in isolation, and it's get an error reliably.\n\n\n\nRandom idea: it looks like you have compression enabled. What if you\nturn it off in the directory where the test runs? Something like\nbtrfs property set <file> compression ... according to the\nintergoogles. (I have never used btrfs before 6 minutes ago but I\ncan't seem to repro this with basic settings in a loopback btrfs\nfilesystems).\n\n\n\nDidn't seem to make any difference.\n\n\ncheers\n\n\nandrew\n \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 9 Apr 2023 07:25:13 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 4:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here, btrfs seems to be taking a different path that I can't quite\n> make out... I see no warning/error about a checksum failure like [1],\n> and we apparently managed to read something other than a mix of the\n> old and new page contents (which, based on your hypothesis, should\n> just leave it indeterminate whether the hint bit changes were captured\n> or not, and the rest of the page should be stable, right). It's like\n> the page time-travelled or got scrambled in some other way, but it\n> didn't tell us? I'll try to dig further...\n\nI think there are two separate bad phenomena.\n\n1. A concurrent modification of the user space buffer while writing\nbreaks the checksum so you can't read the data back in, as . I can\nreproduce that with a stand-alone program, attached. The \"verifier\"\nprocess occasionally reports EIO while reading, unless you comment out\nthe \"scribbler\" process's active line. The system log/dmesg gets some\nwarnings.\n\n2. The crake-style failure doesn't involve any reported checksum\nfailures or errors, and I'm not sure if another process is even\ninvolved. I attach a complete syscall trace of a repro session. (I\ntried to get strace to dump 8192 byte strings, but then it doesn't\nrepro, so we have only the start of the data transferred for each\npage.) 
Working back from the error message,\n\nERROR: invalid page in block 78 of relation base/5/16384,\n\nwe have a page at offset 638976, and we can find all system calls that\ntouched that offset:\n\n[pid 26031] 23:26:48.521123 pwritev(50,\n[{iov_base=\"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\niov_len=8192}], 1, 638976) = 8192\n\n[pid 26040] 23:26:48.568975 pwrite64(5,\n\"\\0\\0\\0\\0\\0Nj\\1\\0\\0\\0\\0\\240\\3\\300\\3\\0 \\4\n\\0\\0\\0\\0\\340\\2378\\0\\300\\2378\\0\"..., 8192, 638976) = 8192\n\n[pid 26040] 23:26:48.593157 pread64(6,\n\"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n8192, 638976) = 8192\n\nIn between the write of non-zeros and the read of zeros, nothing seems\nto happen that could justify that, that I can grok, but perhaps\nsomeone else will see something that I'm missing. We pretty much just\nhave the parallel worker scanning the table, and writing stuff out as\nit does it. This was obtained with:\n\nstrace -f --absolute-timestamps=time,us ~/install/bin/postgres -D\npgdata -c io_direct=data -c shared_buffers=256kB -c wal_level=minimal\n-c max_wal_senders=0 2>&1 | tee trace.log\n\nThe repro is just:\n\nset debug_parallel_query=regress;\ndrop table if exists t;\ncreate table t as select generate_series(1, 10000);\nupdate t set generate_series = 1;\nselect count(*) from t;\n\nOccasionally it fails in a different way: after create table t, later\nreferences to t can't find it in the catalogs but there is no invalid\npage error. Perhaps the freaky zeros are happening one 4k page at a\ntime but perhaps if you get two in a row it might look like an empty\ncatalog page and pass validation.",
"msg_date": "Mon, 10 Apr 2023 00:17:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
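{
"msg_contents": "The scribbler/verifier experiment Thomas attaches above is a C program; the same hazard can be sketched in a few lines of Python. This is an illustrative stand-in, not the attached program: all names here are hypothetical, O_DIRECT is attempted with a buffered fallback (tmpfs and overlayfs typically reject it), and the EIO on read-back is only expected on a checksumming filesystem such as btrfs.\n\n```python\nimport mmap\nimport os\nimport tempfile\nimport threading\n\nBLOCK = 8192  # one PostgreSQL-sized page\n\n\ndef open_maybe_direct(path):\n    \"\"\"Open with O_DIRECT when the OS and filesystem allow it, falling\n    back to buffered I/O (tmpfs and overlayfs commonly reject the flag).\"\"\"\n    base = os.O_CREAT | os.O_RDWR\n    o_direct = getattr(os, \"O_DIRECT\", 0)  # Linux-only flag\n    if o_direct:\n        try:\n            return os.open(path, base | o_direct), True\n        except OSError:\n            pass\n    return os.open(path, base), False\n\n\ndef scribble_while_writing(iterations=100):\n    # mmap gives page-aligned memory, which O_DIRECT requires.\n    wbuf = mmap.mmap(-1, BLOCK)\n    wbuf[:] = b\"A\" * BLOCK\n    rbuf = mmap.mmap(-1, BLOCK)  # aligned destination for reads\n    fd, using_direct = open_maybe_direct(\n        os.path.join(tempfile.mkdtemp(), \"block\"))\n    stop = threading.Event()\n\n    def scribbler():\n        # The \"hint bits set while the page is being written out\" hazard\n        # in miniature: keep flipping one byte of the in-flight buffer.\n        x = 0\n        while not stop.is_set():\n            x = (x + 1) & 0xFF\n            wbuf[0] = x\n\n    t = threading.Thread(target=scribbler)\n    t.start()\n    write_errors = read_errors = 0\n    try:\n        for _ in range(iterations):\n            try:\n                os.pwrite(fd, wbuf, 0)  # racing against the scribbler\n            except OSError:\n                write_errors += 1\n                continue\n            try:\n                os.preadv(fd, [rbuf], 0)  # btrfs can fail here with EIO\n            except OSError:\n                read_errors += 1\n    finally:\n        stop.set()\n        t.join()\n        os.close(fd)\n    return using_direct, write_errors, read_errors\n\n\nif __name__ == \"__main__\":\n    print(scribble_while_writing())\n```\n\nOn a btrfs mount with datacow and checksums in force, read_errors would be expected to climb; on ext4 it should stay at zero even with O_DIRECT, since nothing verifies the torn data on the way back in.",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},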
{
"msg_contents": "On Sun, Apr 9, 2023 at 11:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> Didn't seem to make any difference.\n\nThanks for testing. I think it's COW (and I think that implies also\nchecksums?) that needs to be turned off, at least based on experiments\nhere.\n\n\n",
"msg_date": "Mon, 10 Apr 2023 00:39:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-09 Su 08:39, Thomas Munro wrote:\n> On Sun, Apr 9, 2023 at 11:25 PM Andrew Dunstan<andrew@dunslane.net> wrote:\n>> Didn't seem to make any difference.\n> Thanks for testing. I think it's COW (and I think that implies also\n> checksums?) that needs to be turned off, at least based on experiments\n> here.\n\n\n\nGoogling agrees with you about checksums. But I'm still wondering if we \nshouldn't disable COW for the build directory etc. It is suggested at [1]:\n\n\n Recommend to set nodatacow – turn cow off – for the files that\n require fast IO and tend to get very big and get alot of random\n writes: such VMDK (vm disks) files and the like.\n\n\nI'll give it a whirl.\n\n\ncheers\n\n\nandrew\n\n\n[1] <http://www.infotinks.com/btrfs-disabling-cow-file-directory-nodatacow/>\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-09 Su 08:39, Thomas Munro\n wrote:\n\n\nOn Sun, Apr 9, 2023 at 11:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nDidn't seem to make any difference.\n\n\n\nThanks for testing. I think it's COW (and I think that implies also\nchecksums?) that needs to be turned off, at least based on experiments\nhere.\n\n\n\n\n\nGoogling agrees with you about checksums. But I'm still\n wondering if we shouldn't disable COW for the build directory etc.\n It is suggested at [1]:\n\n\n\nRecommend to set nodatacow – turn cow off – for the files that\n require fast IO and tend to get very big and get alot of random\n writes: such VMDK (vm disks) files and the like. \n\n\n\n\nI'll give it a whirl.\n\n\n\ncheers\n\n\nandrew\n\n\n\n[1]\n<http://www.infotinks.com/btrfs-disabling-cow-file-directory-nodatacow/>\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 9 Apr 2023 09:14:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-09 Su 09:14, Andrew Dunstan wrote:\n>\n>\n> On 2023-04-09 Su 08:39, Thomas Munro wrote:\n>> On Sun, Apr 9, 2023 at 11:25 PM Andrew Dunstan<andrew@dunslane.net> wrote:\n>>> Didn't seem to make any difference.\n>> Thanks for testing. I think it's COW (and I think that implies also\n>> checksums?) that needs to be turned off, at least based on experiments\n>> here.\n>\n>\n>\n> Googling agrees with you about checksums. But I'm still wondering if \n> we shouldn't disable COW for the build directory etc. It is suggested \n> at [1]:\n>\n>\n> Recommend to set nodatacow – turn cow off – for the files that\n> require fast IO and tend to get very big and get alot of random\n> writes: such VMDK (vm disks) files and the like.\n>\n>\n> I'll give it a whirl.\n>\n>\n\nwith COW disabled, I can no longer generate a failure with the test.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-09 Su 09:14, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-04-09 Su 08:39, Thomas Munro\n wrote:\n\n\nOn Sun, Apr 9, 2023 at 11:25 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nDidn't seem to make any difference.\n\n\nThanks for testing. I think it's COW (and I think that implies also\nchecksums?) that needs to be turned off, at least based on experiments\nhere.\n\n\n\n\n\nGoogling agrees with you about checksums. But I'm still\n wondering if we shouldn't disable COW for the build directory\n etc. It is suggested at [1]:\n\n\n\nRecommend to set nodatacow – turn cow off – for the files\n that require fast IO and tend to get very big and get alot of\n random writes: such VMDK (vm disks) files and the like. \n\n\n\n\nI'll give it a whirl.\n\n\n\n\n\n\nwith COW disabled, I can no longer generate a failure with the\n test.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 9 Apr 2023 12:35:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> we have a page at offset 638976, and we can find all system calls that\n> touched that offset:\n\n> [pid 26031] 23:26:48.521123 pwritev(50,\n> [{iov_base=\"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> iov_len=8192}], 1, 638976) = 8192\n\n> [pid 26040] 23:26:48.568975 pwrite64(5,\n> \"\\0\\0\\0\\0\\0Nj\\1\\0\\0\\0\\0\\240\\3\\300\\3\\0 \\4\n> \\0\\0\\0\\0\\340\\2378\\0\\300\\2378\\0\"..., 8192, 638976) = 8192\n\n> [pid 26040] 23:26:48.593157 pread64(6,\n> \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> 8192, 638976) = 8192\n\nBoy, it's hard to look at that trace and not call it a filesystem bug.\nGiven the apparent dependency on COW, I wonder if this has something\nto do with getting confused about which copy is current?\n\nAnother thing that struck me is that the two calls from pid 26040\nare issued on different FDs. I checked the strace log and verified\nthat these do both refer to \"base/5/16384\". It looks like there was\na cache flush at about 23:26:48.575023 that caused 26040 to close\nand later reopen all its database relation FDs. Maybe that is\nsomehow contributing to the filesystem's confusion? And more to the\npoint, could that explain why other O_DIRECT users aren't up in arms\nover this bug? Maybe they don't switch FDs as readily as we do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Apr 2023 16:43:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-08 21:29:54 -0700, Noah Misch wrote:\n> On Sat, Apr 08, 2023 at 11:08:16AM -0700, Andres Freund wrote:\n> > On 2023-04-07 23:04:08 -0700, Andres Freund wrote:\n> > > There were some failures in CI (e.g. [1] (and perhaps also bf, didn't yet\n> > > check), about \"no unpinned buffers available\". I was worried for a moment\n> > > that this could actually be relation to the bulk extension patch.\n> > > \n> > > But it looks like it's older - and not caused by direct_io support (except by\n> > > way of the test existing). I reproduced the issue locally by setting s_b even\n> > > lower, to 16 and made the ERROR a PANIC.\n> > >\n> > > [backtrace]\n> \n> I get an ERROR, not a PANIC:\n\nWhat I meant is that I changed the code to use PANIC, to make it easier to get\na backtrace.\n\n\n> > > If you look at log_newpage_range(), it's not surprising that we get this error\n> > > - it pins up to 32 buffers at once.\n> > > \n> > > Afaics log_newpage_range() originates in 9155580fd5fc, but this caller is from\n> > > c6b92041d385.\n> \n> > > Do we care about fixing this in the backbranches? Probably not, given there\n> > > haven't been user complaints?\n> \n> I would not. This is only going to come up where the user goes out of the way\n> to use near-minimum shared_buffers.\n\nIt's not *just* that scenario. With a few concurrent connections you can get\ninto problematic territory even with halfway reasonable shared buffers.\n\n\n> > Here's a quick prototype of this approach.\n> \n> This looks fine. I'm not enthusiastic about incurring post-startup cycles to\n> cater to allocating less than 512k*max_connections of shared buffers, but I\n> expect the cycles in question are negligible here.\n\nYea, I can't imagine it'd matter, compared to the other costs. Arguably it'd\nallow us to crank up the maximum batch size further, even.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Apr 2023 14:45:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 09, 2023 at 02:45:16PM -0700, Andres Freund wrote:\n> On 2023-04-08 21:29:54 -0700, Noah Misch wrote:\n> > On Sat, Apr 08, 2023 at 11:08:16AM -0700, Andres Freund wrote:\n> > > On 2023-04-07 23:04:08 -0700, Andres Freund wrote:\n> > > > If you look at log_newpage_range(), it's not surprising that we get this error\n> > > > - it pins up to 32 buffers at once.\n> > > > \n> > > > Afaics log_newpage_range() originates in 9155580fd5fc, but this caller is from\n> > > > c6b92041d385.\n> > \n> > > > Do we care about fixing this in the backbranches? Probably not, given there\n> > > > haven't been user complaints?\n> > \n> > I would not. This is only going to come up where the user goes out of the way\n> > to use near-minimum shared_buffers.\n> \n> It's not *just* that scenario. With a few concurrent connections you can get\n> into problematic territory even with halfway reasonable shared buffers.\n\nI am not familiar with such cases. You could get there with 64MB shared\nbuffers and 256 simultaneous commits of new-refilenode-creating transactions,\nbut I'd still file that under going out of one's way to use tiny shared\nbuffers relative to the write activity. What combination did you envision?\n\n\n",
"msg_date": "Sun, 9 Apr 2023 16:40:54 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
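{
"msg_contents": "For context, the batching at issue in log_newpage_range() is just a walk over a block range that pins a bounded number of buffers per iteration (32 in the code under discussion), and the prototype fix mentioned upthread clamps that bound so it cannot exceed what the buffer pool can spare. A minimal, hypothetical Python sketch of that chunking logic (names invented for illustration; the real code is C in xloginsert.c):\n\n```python\ndef newpage_batches(start_block, end_block, max_batch=32, pinnable=None):\n    \"\"\"Yield (first_block, count) batches covering [start_block, end_block),\n    never asking for more simultaneous pins than `pinnable` allows.\n    Hypothetical stand-in for log_newpage_range()'s batching loop.\"\"\"\n    batch = max_batch if pinnable is None else max(1, min(max_batch, pinnable))\n    blkno = start_block\n    while blkno < end_block:\n        n = min(batch, end_block - blkno)\n        yield blkno, n  # caller would pin n buffers, WAL-log them, release\n        blkno += n\n```\n\nWith s_b cranked down to 16 buffers, clamping the per-iteration pin count well below 32 is what avoids the \"no unpinned buffers available\" error discussed above; with ordinary settings the clamp never binds.",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},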
{
"msg_contents": "On Mon, Apr 10, 2023 at 8:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Boy, it's hard to look at that trace and not call it a filesystem bug.\n\nAgreed.\n\n> Given the apparent dependency on COW, I wonder if this has something\n> to do with getting confused about which copy is current?\n\nYeah, I suppose it would require bogus old page versions (or I guess\nalternatively completely mixed up page offsets) rather than bogus\nzeroed pages to explain the too-high count observed in one of crake's\nfailed runs: I guess it counted some pre-updated tuples that were\nsupposed to be deleted and then counted the post-updated tuples on\nlater pages (insert joke about the Easter variant of the Halloween\nproblem). It's just that in the runs I've managed to observe and\nanalyse, the previous version always happened to be zeros.\n\n\n",
"msg_date": "Mon, 10 Apr 2023 11:46:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-10 00:17:12 +1200, Thomas Munro wrote:\n> I think there are two separate bad phenomena.\n> \n> 1. A concurrent modification of the user space buffer while writing\n> breaks the checksum so you can't read the data back in, as . I can\n> reproduce that with a stand-alone program, attached. The \"verifier\"\n> process occasionally reports EIO while reading, unless you comment out\n> the \"scribbler\" process's active line. The system log/dmesg gets some\n> warnings.\n\nI think we really need to think about whether we eventually we want to do\nsomething to avoid modifying pages while IO is in progress. The only\nalternative is for filesystems to make copies of everything in the IO path,\nwhich is far from free (and obviously prevents from using DMA for the whole\nIO). The copy we do to avoid the same problem when checksums are enabled,\nshows up quite prominently in write-heavy profiles, so there's a \"purely\npostgres\" reason to avoid these issues too.\n\n\n> 2. The crake-style failure doesn't involve any reported checksum\n> failures or errors, and I'm not sure if another process is even\n> involved. I attach a complete syscall trace of a repro session. (I\n> tried to get strace to dump 8192 byte strings, but then it doesn't\n> repro, so we have only the start of the data transferred for each\n> page.) 
Working back from the error message,\n> \n> ERROR: invalid page in block 78 of relation base/5/16384,\n> \n> we have a page at offset 638976, and we can find all system calls that\n> touched that offset:\n> \n> [pid 26031] 23:26:48.521123 pwritev(50,\n> [{iov_base=\"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> iov_len=8192}], 1, 638976) = 8192\n> \n> [pid 26040] 23:26:48.568975 pwrite64(5,\n> \"\\0\\0\\0\\0\\0Nj\\1\\0\\0\\0\\0\\240\\3\\300\\3\\0 \\4\n> \\0\\0\\0\\0\\340\\2378\\0\\300\\2378\\0\"..., 8192, 638976) = 8192\n> \n> [pid 26040] 23:26:48.593157 pread64(6,\n> \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> 8192, 638976) = 8192\n> \n> In between the write of non-zeros and the read of zeros, nothing seems\n> to happen that could justify that, that I can grok, but perhaps\n> someone else will see something that I'm missing. We pretty much just\n> have the parallel worker scanning the table, and writing stuff out as\n> it does it. This was obtained with:\n\nHave you tried to write a reproducer for this that doesn't involve postgres?\nIt'd certainly be interesting to know the precise conditions for this. E.g.,\ncan this also happen without O_DIRECT, if cache pressure is high enough for\nthe page to get evicted soon after (potentially simulated with fadvise or\nsuch)?\n\nWe should definitely let the brtfs folks know of this issue... It's possible\nthat this bug was recently introduced even. What kernel version did you repro\nthis on Thomas?\n\nI wonder if we should have a postgres-io-torture program in our tree for some\nof these things. We've found issues with our assumptions on several operating\nsystems and filesystems, without systematically looking. Or even stressing IO\nall that hard in our tests.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Apr 2023 19:57:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Il giorno lun 10 apr 2023 alle ore 04:58 Andres Freund\n<andres@anarazel.de> ha scritto:\n> We should definitely let the brtfs folks know of this issue... It's possible\n> that this bug was recently introduced even. What kernel version did you repro\n> this on Thomas?\n\nIn these days on BTRFS ml they are discussing about Direct I/O data\ncorruption. No patch at the moment, they are still discussing how to\naddress it:\nhttps://lore.kernel.org/linux-btrfs/aa1fb69e-b613-47aa-a99e-a0a2c9ed273f@app.fastmail.com/\n\nCiao,\nGelma\n\n\n",
"msg_date": "Mon, 10 Apr 2023 09:04:22 +0200",
"msg_from": "Andrea Gelmini <andrea.gelmini@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> Have you tried to write a reproducer for this that doesn't involve postgres?\n\nI tried a bit. I'll try harder soon.\n\n> ... What kernel version did you repro\n> this on Thomas?\n\nDebian's 6.0.10-2 kernel (Debian 12 on a random laptop). Here's how I\nset up a test btrfs in case someone else wants a head start:\n\ntruncate -s2G 2GB.img\nsudo losetup --show --find 2GB.img\nsudo mkfs -t btrfs /dev/loop0 # the device name shown by losetup\nsudo mkdir /mnt/tmp\nsudo mount /dev/loop0 /mnt/tmp\nsudo chown $(whoami) /mnt/tmp\n\ncd /mnt/tmp\n/path/to/initdb -D pgdata\n... (see instructions further up for postgres command line + queries to run)\n\n\n",
"msg_date": "Mon, 10 Apr 2023 19:27:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 7:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Debian's 6.0.10-2 kernel (Debian 12 on a random laptop).\n\nRealising I hadn't updated for a bit, I did so and it still reproduces on:\n\n$ uname -a\nLinux x1 6.1.0-7-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.20-1\n(2023-03-19) x86_64 GNU/Linux\n\n\n",
"msg_date": "Mon, 10 Apr 2023 19:40:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-10 19:27:27 +1200, Thomas Munro wrote:\n> On Mon, Apr 10, 2023 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > Have you tried to write a reproducer for this that doesn't involve postgres?\n> \n> I tried a bit. I'll try harder soon.\n> \n> > ... What kernel version did you repro\n> > this on Thomas?\n> \n> Debian's 6.0.10-2 kernel (Debian 12 on a random laptop). Here's how I\n> set up a test btrfs in case someone else wants a head start:\n> \n> truncate -s2G 2GB.img\n> sudo losetup --show --find 2GB.img\n> sudo mkfs -t btrfs /dev/loop0 # the device name shown by losetup\n> sudo mkdir /mnt/tmp\n> sudo mount /dev/loop0 /mnt/tmp\n> sudo chown $(whoami) /mnt/tmp\n> \n> cd /mnt/tmp\n> /path/to/initdb -D pgdata\n> ... (see instructions further up for postgres command line + queries to run)\n\nI initially failed to repro the issue with these instructions. Turns out that\nthe problem does not happen if huge pages are in used - I'd configured huge\npages, so the default huge_pages=try succeeded. As soon as I disable\nhuge_pages explicitly, it reproduces.\n\nAnother interesting bit is that if checksums are enabled, I also can't\nreproduce the issue. Together with the huge_page issue, it does suggest that\nthis is somehow related to page faults. Which fits with the thread Andrea\nreferenced...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Apr 2023 18:55:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-10 18:55:26 -0700, Andres Freund wrote:\n> On 2023-04-10 19:27:27 +1200, Thomas Munro wrote:\n> > On Mon, Apr 10, 2023 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Have you tried to write a reproducer for this that doesn't involve postgres?\n> > \n> > I tried a bit. I'll try harder soon.\n> > \n> > > ... What kernel version did you repro\n> > > this on Thomas?\n> > \n> > Debian's 6.0.10-2 kernel (Debian 12 on a random laptop). Here's how I\n> > set up a test btrfs in case someone else wants a head start:\n> > \n> > truncate -s2G 2GB.img\n> > sudo losetup --show --find 2GB.img\n> > sudo mkfs -t btrfs /dev/loop0 # the device name shown by losetup\n> > sudo mkdir /mnt/tmp\n> > sudo mount /dev/loop0 /mnt/tmp\n> > sudo chown $(whoami) /mnt/tmp\n> > \n> > cd /mnt/tmp\n> > /path/to/initdb -D pgdata\n> > ... (see instructions further up for postgres command line + queries to run)\n> \n> I initially failed to repro the issue with these instructions. Turns out that\n> the problem does not happen if huge pages are in used - I'd configured huge\n> pages, so the default huge_pages=try succeeded. As soon as I disable\n> huge_pages explicitly, it reproduces.\n> \n> Another interesting bit is that if checksums are enabled, I also can't\n> reproduce the issue. Together with the huge_page issue, it does suggest that\n> this is somehow related to page faults. Which fits with the thread Andrea\n> referenced...\n\nThe last iteration of the fix that I could find is:\nhttps://lore.kernel.org/linux-btrfs/20230328051957.1161316-1-hch@lst.de/T/#m1afdc3fe77e10a97302e7d80fed3efeaa297f0f7\n\nAnd the fix has been merged into\nhttps://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git/log/?h=for-next\n\nI think that means it'll have to wait for 6.4 development to open (in a few\nweeks), and then will be merged into the stable branches from there.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Apr 2023 19:15:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 2:15 PM Andres Freund <andres@anarazel.de> wrote:\n> And the fix has been merged into\n> https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git/log/?h=for-next\n>\n> I think that means it'll have to wait for 6.4 development to open (in a few\n> weeks), and then will be merged into the stable branches from there.\n\nGreat! Let's hope/assume for now that that'll fix phenomenon #2.\nThat still leaves the checksum-vs-concurrent-modification thing that I\ncalled phenomenon #1, which we've not actually hit with PostgreSQL yet\nbut is clearly possible and can be seen with the stand-alone\nrepro-program I posted upthread. You wrote:\n\nOn Mon, Apr 10, 2023 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> I think we really need to think about whether we eventually we want to do\n> something to avoid modifying pages while IO is in progress. The only\n> alternative is for filesystems to make copies of everything in the IO path,\n> which is far from free (and obviously prevents from using DMA for the whole\n> IO). The copy we do to avoid the same problem when checksums are enabled,\n> shows up quite prominently in write-heavy profiles, so there's a \"purely\n> postgres\" reason to avoid these issues too.\n\n+1\n\nI wonder what the other file systems that maintain checksums (see list\nat [1]) do when the data changes underneath a write. ZFS's policy is\nconservative[2], while BTRFS took the demons-will-fly-out-of-your-nose\nroute. I can see arguments for both approaches (ZFS can only reach\nzero-copy optimum by turning off checksums completely, while BTRFS is\nhappy to assume that if you break this programming rule that is not\nwritten down anywhere then you must never want to see your data ever\nagain). What about ReFS? CephFS?\n\nI tried to find out what POSIX says about this WRT synchronous\npwrite() (as Tom suggested, maybe we're doing something POSIX doesn't\nallow), but couldn't find it in my first attempt. 
It *does* say it's\nundefined for aio_write() (which means that my prototype\nio_method=posix_aio code that uses that stuff is undefined in presense\nof hintbit modifications). I don't really see why it should vary\nbetween synchronous and asynchronous interfaces (considering the\nexistence of threads, shared memory etc, the synchronous interface\nonly removes one thread from list of possible suspects that could flip\nsome bits).\n\nBut yeah, in any case, it doesn't seem great that we do that.\n\n[1] https://en.wikipedia.org/wiki/Comparison_of_file_systems#Block_capabilities\n[2] https://openzfs.topicbox.com/groups/developer/T950b02acdf392290/odirect-semantics-in-zfs\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:31:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 2:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I tried to find out what POSIX says about this\n\n(But of course whatever it might say is of especially limited value\nwhen O_DIRECT is in the picture, being completely unstandardised.\nReally I guess all they meant was \"if you *copy* something that's\nmoving, who knows which bits you'll copy\"... not \"your data might be\nincinerated with lasers\".)\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:58:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-09 16:40:54 -0700, Noah Misch wrote:\n> On Sun, Apr 09, 2023 at 02:45:16PM -0700, Andres Freund wrote:\n> > It's not *just* that scenario. With a few concurrent connections you can get\n> > into problematic territory even with halfway reasonable shared buffers.\n>\n> I am not familiar with such cases. You could get there with 64MB shared\n> buffers and 256 simultaneous commits of new-refilenode-creating transactions,\n> but I'd still file that under going out of one's way to use tiny shared\n> buffers relative to the write activity. What combination did you envision?\n\nI'd not say it's common, but it's less crazy than running with 128kB of s_b...\n\nThere's also the issue that log_newpage_range() is used in number of places\nwhere we could have a lot of pre-existing buffer pins. So pinning another 64\nbuffers could tip us over.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 10:53:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nI'm hitting a panic in t_004_io_direct. The build is running on\noverlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\ncombination but has worked well for building everything over the last\ndecade. On Debian unstable:\n\nPANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n\n16:21:16 Bailout called. Further testing stopped: pg_ctl start failed\n16:21:16 t/004_io_direct.pl ..............\n16:21:16 Dubious, test returned 255 (wstat 65280, 0xff00)\n16:21:16 No subtests run\n16:21:16\n16:21:16 Test Summary Report\n16:21:16 -------------------\n16:21:16 t/004_io_direct.pl (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)\n16:21:16 Non-zero exit status: 255\n16:21:16 Parse errors: No plan found in TAP output\n16:21:16 Files=4, Tests=65, 9 wallclock secs ( 0.03 usr 0.02 sys + 3.78 cusr 1.48 csys = 5.31 CPU)\n16:21:16 Result: FAIL\n\n16:21:16 ******** build/src/test/modules/test_misc/tmp_check/log/004_io_direct_main.log ********\n16:21:16 2023-04-11 23:21:16.431 UTC [25991] LOG: starting PostgreSQL 16devel (Debian 16~~devel-1.pgdg+~20230411.2256.gc03c2ea) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit\n16:21:16 2023-04-11 23:21:16.431 UTC [25991] LOG: listening on Unix socket \"/tmp/s0C4hWQq82/.s.PGSQL.54693\"\n16:21:16 2023-04-11 23:21:16.433 UTC [25994] LOG: database system was shut down at 2023-04-11 23:21:16 UTC\n16:21:16 2023-04-11 23:21:16.434 UTC [25994] PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n16:21:16 2023-04-11 23:21:16.525 UTC [25991] LOG: startup process (PID 25994) was terminated by signal 6: Aborted\n16:21:16 2023-04-11 23:21:16.525 UTC [25991] LOG: aborting startup due to startup process failure\n16:21:16 2023-04-11 23:21:16.526 UTC [25991] LOG: database system is shut down\n\n16:21:16 ******** build/src/test/modules/test_misc/tmp_check/t_004_io_direct_main_data/pgdata/core ********\n16:21:17\n16:21:17 warning: Can't open file 
/dev/shm/PostgreSQL.3457641370 during file-backed mapping note processing\n16:21:17\n16:21:17 warning: Can't open file /dev/shm/PostgreSQL.2391834648 during file-backed mapping note processing\n16:21:17\n16:21:17 warning: Can't open file /dev/zero (deleted) during file-backed mapping note processing\n16:21:17\n16:21:17 warning: Can't open file /SYSV00000dea (deleted) during file-backed mapping note processing\n16:21:17 [New LWP 25994]\n16:21:17 [Thread debugging using libthread_db enabled]\n16:21:17 Using host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\n16:21:17 Core was generated by `postgres: main: startup '.\n16:21:17 Program terminated with signal SIGABRT, Aborted.\n16:21:17 #0 0x00007f176c591ccc in ?? () from /lib/x86_64-linux-gnu/libc.so.6\n16:21:17 #0 0x00007f176c591ccc in ?? () from /lib/x86_64-linux-gnu/libc.so.6\n16:21:17 No symbol table info available.\n16:21:17 #1 0x00007f176c542ef2 in raise () from /lib/x86_64-linux-gnu/libc.so.6\n16:21:17 No symbol table info available.\n16:21:17 #2 0x00007f176c52d472 in abort () from /lib/x86_64-linux-gnu/libc.so.6\n16:21:17 No symbol table info available.\n16:21:17 #3 0x000055a7ba7978a1 in errfinish (filename=<optimized out>, lineno=<optimized out>, funcname=0x55a7ba810560 <__func__.47> \"XLogFileInitInternal\") at ./build/../src/backend/utils/error/elog.c:604\n16:21:17 edata = 0x55a7baae3e20 <errordata>\n16:21:17 elevel = 23\n16:21:17 oldcontext = 0x55a7bb471590\n16:21:17 econtext = 0x0\n16:21:17 __func__ = \"errfinish\"\n16:21:17 #4 0x000055a7ba21759c in XLogFileInitInternal (logsegno=1, logtli=logtli@entry=1, added=added@entry=0x7ffebc6c8a3f, path=path@entry=0x7ffebc6c8a40 \"pg_wal/00000001\", '0' <repeats 15 times>, \"1\") at ./build/../src/backend/access/transam/xlog.c:2944\n16:21:17 __errno_location = <optimized out>\n16:21:17 tmppath = 
\"0\\214l\\274\\376\\177\\000\\000\\321\\330~\\272\\247U\\000\\000\\005Q\\223\\272\\247U\\000\\000p\\214l\\274\\376\\177\\000\\000`\\214l\\274\\376\\177\\000\\000\\212\\335~\\000\\v\", '\\000' <repeats 31 times>, \"\\247U\\000\\000\\000\\000\\000\\000\\000\\177\\000\\000*O\\202\\272\\247U\\000\\000\\254\\206l\\274\\376\\177\\000\\000\\000\\000\\000\\000\\v\", '\\000' <repeats 23 times>, \"0\\000\\000\\000\\000\\000\\000\\000\\247U\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\001Q\\223\\272\\247U\\000\\000\\240\\215l\\274\\376\\177\\000\\000\\376\\377\\377\\377\\000\\000\\000\\0000\\207l\\274\\376\\177\\000\\000[\\326~\\272\\247U\\000\\0000\\207l\\274\\376\\177\\000\\000\"...\n16:21:17 installed_segno = 0\n16:21:17 max_segno = <optimized out>\n16:21:17 fd = <optimized out>\n16:21:17 save_errno = <optimized out>\n16:21:17 open_flags = 194\n16:21:17 __func__ = \"XLogFileInitInternal\"\n16:21:17 #5 0x000055a7ba35a1d5 in XLogFileInit (logsegno=<optimized out>, logtli=logtli@entry=1) at ./build/../src/backend/access/transam/xlog.c:3099\n16:21:17 ignore_added = false\n16:21:17 path = \"pg_wal/00000001\", '0' <repeats 15 times>, \"1\\000\\220\\312P\\273\\247U\\000\\000/\\375Yl\\027\\177\\000\\000\\220\\252P\\273\\247U\\000\\000\\001\", '\\000' <repeats 15 times>, \"\\220\\252P\\273\\247U\\000\\000\\300\\212l\\274\\376\\177\\000\\000\\002\\261{\\272\\247U\\000\\000\\220\\252P\\273\\247U\\000\\000\\220\\252P\\273\\247U\\000\\000\\001\", '\\000' <repeats 15 times>, \"\\340\\212l\\274\\376\\177\\000\\000\\021\\032|\\272\\247U\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\240\\312P\\273\\247U\\000\\0000\\213l\\274\\376\\177\\000\\000\\350\\262{\\272\\247U\\000\\000\\001\", '\\000' <repeats 16 times>, \"\\256\\023i\\027\\177\\000\\000\"...\n16:21:17 fd = <optimized out>\n16:21:17 __func__ = \"XLogFileInit\"\n16:21:17 #6 0x000055a7ba35bab3 in XLogWrite (WriteRqst=..., tli=tli@entry=1, flexible=flexible@entry=false) at 
./build/../src/backend/access/transam/xlog.c:2137\n16:21:17 EndPtr = 21954560\n16:21:17 ispartialpage = true\n16:21:17 last_iteration = <optimized out>\n16:21:17 finishing_seg = <optimized out>\n16:21:17 curridx = 7\n16:21:17 npages = 0\n16:21:17 startidx = 0\n16:21:17 startoffset = 0\n16:21:17 __func__ = \"XLogWrite\"\n16:21:17 #7 0x000055a7ba35c8e0 in XLogFlush (record=21949600) at ./build/../src/backend/access/transam/xlog.c:2638\n16:21:17 insertpos = 21949600\n16:21:17 WriteRqstPtr = 21949600\n16:21:17 WriteRqst = <optimized out>\n16:21:17 insertTLI = 1\n16:21:17 __func__ = \"XLogFlush\"\n16:21:17 #8 0x000055a7ba36118e in XLogReportParameters () at ./build/../src/backend/access/transam/xlog.c:7620\n16:21:17 xlrec = {MaxConnections = 100, max_worker_processes = 8, max_wal_senders = 0, max_prepared_xacts = 0, max_locks_per_xact = 64, wal_level = 1, wal_log_hints = false, track_commit_timestamp = false}\n16:21:17 recptr = <optimized out>\n16:21:17 #9 StartupXLOG () at ./build/../src/backend/access/transam/xlog.c:5726\n16:21:17 Insert = <optimized out>\n16:21:17 checkPoint = <optimized out>\n16:21:17 wasShutdown = true\n16:21:17 didCrash = <optimized out>\n16:21:17 haveTblspcMap = false\n16:21:17 haveBackupLabel = false\n16:21:17 EndOfLog = 21949544\n16:21:17 EndOfLogTLI = <optimized out>\n16:21:17 newTLI = 1\n16:21:17 performedWalRecovery = <optimized out>\n16:21:17 endOfRecoveryInfo = <optimized out>\n16:21:17 abortedRecPtr = <optimized out>\n16:21:17 missingContrecPtr = 0\n16:21:17 oldestActiveXID = <optimized out>\n16:21:17 promoted = false\n16:21:17 __func__ = \"StartupXLOG\"\n16:21:17 #10 0x000055a7ba5b4d00 in StartupProcessMain () at ./build/../src/backend/postmaster/startup.c:267\n16:21:17 No locals.\n16:21:17 #11 0x000055a7ba5ab0cf in AuxiliaryProcessMain (auxtype=auxtype@entry=StartupProcess) at ./build/../src/backend/postmaster/auxprocess.c:141\n16:21:17 __func__ = \"AuxiliaryProcessMain\"\n16:21:17 #12 0x000055a7ba5b0aa3 in StartChildProcess 
(type=StartupProcess) at ./build/../src/backend/postmaster/postmaster.c:5369\n16:21:17 pid = <optimized out>\n16:21:17 __func__ = \"StartChildProcess\"\n16:21:17 save_errno = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 __errno_location = <optimized out>\n16:21:17 #13 0x000055a7ba5b45d6 in PostmasterMain (argc=argc@entry=4, argv=argv@entry=0x55a7bb471450) at ./build/../src/backend/postmaster/postmaster.c:1455\n16:21:17 opt = <optimized out>\n16:21:17 status = <optimized out>\n16:21:17 userDoption = <optimized out>\n16:21:17 listen_addr_saved = <optimized out>\n16:21:17 i = <optimized out>\n16:21:17 output_config_variable = <optimized out>\n16:21:17 __func__ = \"PostmasterMain\"\n16:21:17 #14 0x000055a7ba29fd62 in main (argc=4, argv=0x55a7bb471450) at ./build/../src/backend/main/main.c:200\n16:21:17 do_check_root = <optimized out>\n\nApologies if this was already reported elsewhere in the thread, I\nskimmed it but the problems looked different.\n\nChristoph\n\n\n",
"msg_date": "Tue, 11 Apr 2023 19:56:32 -0700",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg <myon@debian.org> wrote:\n> I'm hitting a panic in t_004_io_direct. The build is running on\n> overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n> combination but has worked well for building everything over the last\n> decade. On Debian unstable:\n>\n> PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n\nHi Christoph,\n\nThat's an interesting one. I was half expecting to see that on some\nunusual systems, which is why I made the test check which OS it is and\nexclude those that are known to fail with EINVAL or ENOTSUPP on their\ncommon/typical file systems. But if it's going to be Linux, that's\nnot going to work. I have a new idea: perhaps it is possible to try\nto open a file with O_DIRECT from perl, and if it fails like that,\nskip the test. Looking into that now.\n\n\n",
"msg_date": "Wed, 12 Apr 2023 15:04:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg <myon@debian.org> wrote:\n> > I'm hitting a panic in t_004_io_direct. The build is running on\n> > overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n> > combination but has worked well for building everything over the last\n> > decade. On Debian unstable:\n> >\n> > PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n\n> ... I have a new idea: perhaps it is possible to try\n> to open a file with O_DIRECT from perl, and if it fails like that,\n> skip the test. Looking into that now.\n\nI think I have that working OK. Any Perl hackers want to comment on\nmy use of IO::File (copied from examples on the internet that showed\nhow to use O_DIRECT)? I am not much of a perl hacker but according to\nmy package manager, IO/File.pm came with perl itself. And the Fcntl\neval trick that I copied from File::stat, and the perl-critic\nsuppression that requires?\n\nI tested this on OpenBSD, which has no O_DIRECT, so that tests the\nfirst reason to skip.\n\nDoes it skip OK on your system, for the second reason? Should we be\nmore specific about the errno?\n\nAs far as I know, the only systems on the build farm that should be\naffected by this change are the illumos boxen (they have O_DIRECT,\nunlike Solaris, but perl's $^O couldn't tell the difference between\nSolaris and illumos, so they didn't previously run this test).\n\nOne thing I resisted the urge to do is invent PG_TEST_SKIP, a sort of\nanti-PG_TEST_EXTRA. I think I'd rather hear about it if there is a\nsystem out there that passes the pre-flight check, but fails later on,\nbecause we'd better investigate why. That's basically the point of\nshipping this \"developer only\" feature long before serious use of it.",
"msg_date": "Wed, 12 Apr 2023 17:48:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 5:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg <myon@debian.org> wrote:\n> > > I'm hitting a panic in t_004_io_direct. The build is running on\n> > > overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n> > > combination but has worked well for building everything over the last\n> > > decade. On Debian unstable:\n\nAfter trying a couple of things and doing some googling, it looks like\nit's tmpfs that rejects it, not overlayfs, so I'd adjust that commit\nmessage slightly. Of course it's a completely reasonable thing to\nexpect the tests to pass (or in this case be skipped) in a tmpfs, eg\n/tmp on some distributions. (It's a strange to contemplate what\nO_DIRECT means for tmpfs, considering that it *is* the page cache,\nkinda, and I see people have been arguing about that for a couple of\ndecades since O_DIRECT was added to Linux; doesn't seem that helpful\nto me that it rejects it, but 🤷).\n\n\n",
"msg_date": "Wed, 12 Apr 2023 19:37:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-12 We 01:48, Thomas Munro wrote:\n> On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro<thomas.munro@gmail.com> wrote:\n>> On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg<myon@debian.org> wrote:\n>>> I'm hitting a panic in t_004_io_direct. The build is running on\n>>> overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n>>> combination but has worked well for building everything over the last\n>>> decade. On Debian unstable:\n>>>\n>>> PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n>> ... I have a new idea: perhaps it is possible to try\n>> to open a file with O_DIRECT from perl, and if it fails like that,\n>> skip the test. Looking into that now.\n> I think I have that working OK. Any Perl hackers want to comment on\n> my use of IO::File (copied from examples on the internet that showed\n> how to use O_DIRECT)? I am not much of a perl hacker but according to\n> my package manager, IO/File.pm came with perl itself. And the Fcntl\n> eval trick that I copied from File::stat, and the perl-critic\n> suppression that requires?\n\n\nI think you can probably replace a lot of the magic here by simply saying\n\n\nif (Fcntl->can(\"O_DIRECT\")) ...\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-12 We 01:48, Thomas Munro\n wrote:\n\n\nOn Wed, Apr 12, 2023 at 3:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n\nOn Wed, Apr 12, 2023 at 2:56 PM Christoph Berg <myon@debian.org> wrote:\n\n\nI'm hitting a panic in t_004_io_direct. The build is running on\noverlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\ncombination but has worked well for building everything over the last\ndecade. On Debian unstable:\n\nPANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n\n\n\n\n\n\n... I have a new idea: perhaps it is possible to try\nto open a file with O_DIRECT from perl, and if it fails like that,\nskip the test. 
Looking into that now.\n\n\n\nI think I have that working OK. Any Perl hackers want to comment on\nmy use of IO::File (copied from examples on the internet that showed\nhow to use O_DIRECT)? I am not much of a perl hacker but according to\nmy package manager, IO/File.pm came with perl itself. And the Fcntl\neval trick that I copied from File::stat, and the perl-critic\nsuppression that requires?\n\n\n\nI think you can probably replace a lot of the magic here by\n simply saying\n\n\nif (Fcntl->can(\"O_DIRECT\")) ...\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 12 Apr 2023 09:08:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> I think I have that working OK. Any Perl hackers want to comment on\n> my use of IO::File (copied from examples on the internet that showed\n> how to use O_DIRECT)? I am not much of a perl hacker but according to\n> my package manager, IO/File.pm came with perl itself.\n\nIndeed, and it has been since perl 5.003_07, released in 1996. And Fcntl\nhas known about O_DIRECT since perl 5.6.0, released in 2000, so we can\nsafely use both.\n\n> And the Fcntl eval trick that I copied from File::stat, and the\n> perl-critic suppression that requires?\n[…]\n> +\tno strict 'refs'; ## no critic (ProhibitNoStrict)\n> +\tmy $val = eval { &{'Fcntl::O_DIRECT'} };\n> +\tif (defined $val)\n\nThis trick is only needed in File::stat because it's constructing the\nsymbol name dynamically. And because Fcntl by default exports all the\nO_* and F_* constants it knows about, we can simply do:\n\n \tif (defined &O_DIRECT)\n> +\t{\n> +\t\tuse Fcntl qw(O_DIRECT);\n\nThe `use Fcntl;` above will already have imported this, so this is\nredundant.\n\n- ilmari\n\n\n",
"msg_date": "Wed, 12 Apr 2023 14:31:26 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> On 2023-04-12 We 01:48, Thomas Munro wrote:\n>> On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro<thomas.munro@gmail.com> wrote:\n>>> On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg<myon@debian.org> wrote:\n>>>> I'm hitting a panic in t_004_io_direct. The build is running on\n>>>> overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n>>>> combination but has worked well for building everything over the last\n>>>> decade. On Debian unstable:\n>>>>\n>>>> PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n>>> ... I have a new idea: perhaps it is possible to try\n>>> to open a file with O_DIRECT from perl, and if it fails like that,\n>>> skip the test. Looking into that now.\n>> I think I have that working OK. Any Perl hackers want to comment on\n>> my use of IO::File (copied from examples on the internet that showed\n>> how to use O_DIRECT)? I am not much of a perl hacker but according to\n>> my package manager, IO/File.pm came with perl itself. And the Fcntl\n>> eval trick that I copied from File::stat, and the perl-critic\n>> suppression that requires?\n>\n>\n> I think you can probably replace a lot of the magic here by simply saying\n>\n>\n> if (Fcntl->can(\"O_DIRECT\")) ...\n\nFcntl->can() is true for all constants that Fcntl knows about, whether\nor not they are defined for your OS. `defined &O_DIRECT` is the simplest\ncheck, see my other reply to Thomas.\n\n> cheers\n>\n>\n> andrew\n\n- ilmari\n\n\n",
"msg_date": "Wed, 12 Apr 2023 15:23:58 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-12 We 10:23, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>\n>> On 2023-04-12 We 01:48, Thomas Munro wrote:\n>>> On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro<thomas.munro@gmail.com> wrote:\n>>>> On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg<myon@debian.org> wrote:\n>>>>> I'm hitting a panic in t_004_io_direct. The build is running on\n>>>>> overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n>>>>> combination but has worked well for building everything over the last\n>>>>> decade. On Debian unstable:\n>>>>>\n>>>>> PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n>>>> ... I have a new idea: perhaps it is possible to try\n>>>> to open a file with O_DIRECT from perl, and if it fails like that,\n>>>> skip the test. Looking into that now.\n>>> I think I have that working OK. Any Perl hackers want to comment on\n>>> my use of IO::File (copied from examples on the internet that showed\n>>> how to use O_DIRECT)? I am not much of a perl hacker but according to\n>>> my package manager, IO/File.pm came with perl itself. And the Fcntl\n>>> eval trick that I copied from File::stat, and the perl-critic\n>>> suppression that requires?\n>>\n>> I think you can probably replace a lot of the magic here by simply saying\n>>\n>>\n>> if (Fcntl->can(\"O_DIRECT\")) ...\n> Fcntl->can() is true for all constants that Fcntl knows about, whether\n> or not they are defined for your OS. `defined &O_DIRECT` is the simplest\n> check, see my other reply to Thomas.\n>\n>\n\nMy understanding was that Fcntl only exported constants known to the OS. 
\nThat's certainly what its docco suggests, e.g.:\n\n By default your system's F_* and O_* constants (eg, F_DUPFD and \nO_CREAT)\n and the FD_CLOEXEC constant are exported into your namespace.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 12 Apr 2023 12:12:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> On 2023-04-12 We 10:23, Dagfinn Ilmari Mannsåker wrote:\n>> Andrew Dunstan<andrew@dunslane.net> writes:\n>>\n>>> On 2023-04-12 We 01:48, Thomas Munro wrote:\n>>>> On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro<thomas.munro@gmail.com> wrote:\n>>>>> On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg<myon@debian.org> wrote:\n>>>>>> I'm hitting a panic in t_004_io_direct. The build is running on\n>>>>>> overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n>>>>>> combination but has worked well for building everything over the last\n>>>>>> decade. On Debian unstable:\n>>>>>>\n>>>>>> PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n>>>>> ... I have a new idea: perhaps it is possible to try\n>>>>> to open a file with O_DIRECT from perl, and if it fails like that,\n>>>>> skip the test. Looking into that now.\n>>>> I think I have that working OK. Any Perl hackers want to comment on\n>>>> my use of IO::File (copied from examples on the internet that showed\n>>>> how to use O_DIRECT)? I am not much of a perl hacker but according to\n>>>> my package manager, IO/File.pm came with perl itself. And the Fcntl\n>>>> eval trick that I copied from File::stat, and the perl-critic\n>>>> suppression that requires?\n>>>\n>>> I think you can probably replace a lot of the magic here by simply saying\n>>>\n>>>\n>>> if (Fcntl->can(\"O_DIRECT\")) ...\n>> Fcntl->can() is true for all constants that Fcntl knows about, whether\n>> or not they are defined for your OS. `defined &O_DIRECT` is the simplest\n>> check, see my other reply to Thomas.\n>>\n>>\n>\n> My understanding was that Fcntl only exported constants known to the\n> OS. That's certainly what its docco suggests, e.g.:\n>\n> By default your system's F_* and O_* constants (eg, F_DUPFD and\n> O_CREAT)\n> and the FD_CLOEXEC constant are exported into your namespace.\n\nIt's a bit more magical than that (this is Perl after all). 
They are\nall exported (which implicitly creates stubs visible to `->can()`,\nsimilarly to forward declarations like `sub O_FOO;`), but only the\ndefined ones (`#ifdef O_FOO` is true) are defined (`defined &O_FOO` is\ntrue). The rest fall through to an AUTOLOAD¹ function that throws an\nexception for undefined ones.\n\nHere's an example (Fcntl knows O_RAW, but Linux does not define it):\n\n $ perl -E '\n use strict; use Fcntl;\n say \"can\", main->can(\"O_RAW\") ? \"\" : \"not\";\n say defined &O_RAW ? \"\" : \"not \", \"defined\";\n say O_RAW;'\n can\n not defined\n Your vendor has not defined Fcntl macro O_RAW, used at -e line 4\n\nWhile O_DIRECT is defined:\n\n $ perl -E '\n use strict; use Fcntl;\n say \"can\", main->can(\"O_DIRECT\") ? \"\" : \"not\";\n say defined &O_DIRECT ? \"\" : \"not \", \"defined\";\n say O_DIRECT;'\n can\n defined\n 16384\n\nAnd O_FOO is unknown to Fcntl (the parens on `O_FOO()q are to make it\nnot a bareword, which would be a compile error under `use strict;` when\nthe sub doesn't exist at all):\n\n $ perl -E '\n use strict; use Fcntl;\n say \"can\", main->can(\"O_FOO\") ? \"\" : \"not\";\n say defined &O_FOO ? \"\" : \"not \", \"defined\";\n say O_FOO();'\n cannot\n not defined\n Undefined subroutine &main::O_FOO called at -e line 4.\n\n> cheers\n>\n>\n> andrew\n\n- ilmari\n\n[1] https://perldoc.perl.org/perlsub#Autoloading\n\n\n",
"msg_date": "Wed, 12 Apr 2023 17:38:06 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-12 We 12:38, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>\n>> On 2023-04-12 We 10:23, Dagfinn Ilmari Mannsåker wrote:\n>>> Andrew Dunstan<andrew@dunslane.net> writes:\n>>>\n>>>> On 2023-04-12 We 01:48, Thomas Munro wrote:\n>>>>> On Wed, Apr 12, 2023 at 3:04 PM Thomas Munro<thomas.munro@gmail.com> wrote:\n>>>>>> On Wed, Apr 12, 2023 at 2:56 PM Christoph Berg<myon@debian.org> wrote:\n>>>>>>> I'm hitting a panic in t_004_io_direct. The build is running on\n>>>>>>> overlayfs on tmpfs/ext4 (upper/lower) which is probably a weird\n>>>>>>> combination but has worked well for building everything over the last\n>>>>>>> decade. On Debian unstable:\n>>>>>>>\n>>>>>>> PANIC: could not open file \"pg_wal/000000010000000000000001\": Invalid argument\n>>>>>> ... I have a new idea: perhaps it is possible to try\n>>>>>> to open a file with O_DIRECT from perl, and if it fails like that,\n>>>>>> skip the test. Looking into that now.\n>>>>> I think I have that working OK. Any Perl hackers want to comment on\n>>>>> my use of IO::File (copied from examples on the internet that showed\n>>>>> how to use O_DIRECT)? I am not much of a perl hacker but according to\n>>>>> my package manager, IO/File.pm came with perl itself. And the Fcntl\n>>>>> eval trick that I copied from File::stat, and the perl-critic\n>>>>> suppression that requires?\n>>>> I think you can probably replace a lot of the magic here by simply saying\n>>>>\n>>>>\n>>>> if (Fcntl->can(\"O_DIRECT\")) ...\n>>> Fcntl->can() is true for all constants that Fcntl knows about, whether\n>>> or not they are defined for your OS. `defined &O_DIRECT` is the simplest\n>>> check, see my other reply to Thomas.\n>>>\n>>>\n>> My understanding was that Fcntl only exported constants known to the\n>> OS. 
That's certainly what its docco suggests, e.g.:\n>>\n>> By default your system's F_* and O_* constants (eg, F_DUPFD and\n>> O_CREAT)\n>> and the FD_CLOEXEC constant are exported into your namespace.\n> It's a bit more magical than that (this is Perl after all). They are\n> all exported (which implicitly creates stubs visible to `->can()`,\n> similarly to forward declarations like `sub O_FOO;`), but only the\n> defined ones (`#ifdef O_FOO` is true) are defined (`defined &O_FOO` is\n> true). The rest fall through to an AUTOLOAD¹ function that throws an\n> exception for undefined ones.\n>\n> Here's an example (Fcntl knows O_RAW, but Linux does not define it):\n>\n> $ perl -E '\n> use strict; use Fcntl;\n> say \"can\", main->can(\"O_RAW\") ? \"\" : \"not\";\n> say defined &O_RAW ? \"\" : \"not \", \"defined\";\n> say O_RAW;'\n> can\n> not defined\n> Your vendor has not defined Fcntl macro O_RAW, used at -e line 4\n>\n> While O_DIRECT is defined:\n>\n> $ perl -E '\n> use strict; use Fcntl;\n> say \"can\", main->can(\"O_DIRECT\") ? \"\" : \"not\";\n> say defined &O_DIRECT ? \"\" : \"not \", \"defined\";\n> say O_DIRECT;'\n> can\n> defined\n> 16384\n>\n> And O_FOO is unknown to Fcntl (the parens on `O_FOO()q are to make it\n> not a bareword, which would be a compile error under `use strict;` when\n> the sub doesn't exist at all):\n>\n> $ perl -E '\n> use strict; use Fcntl;\n> say \"can\", main->can(\"O_FOO\") ? \"\" : \"not\";\n> say defined &O_FOO ? \"\" : \"not \", \"defined\";\n> say O_FOO();'\n> cannot\n> not defined\n> Undefined subroutine &main::O_FOO called at -e line 4.\n>\n>\n\n*grumble* a bit too magical for my taste. 
Thanks for the correction.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 12 Apr 2023 14:57:46 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thanks both for looking, and thanks for the explanation Ilmari.\nPushed with your improvements. The 4 CI systems run the tests\n(Windows and Mac by special always-expected-to-work case, Linux and\nFreeBSD by successful pre-flight perl test of O_DIRECT), and I also\ntested three unusual systems, two that skip for different reasons and\none that will henceforth run this test on the build farm so I wanted\nto confirm locally first:\n\nLinux/tmpfs: 1..0 # SKIP pre-flight test if we can open a file with\nO_DIRECT failed: Invalid argument\nOpenBSD: t/004_io_direct.pl .............. skipped: no O_DIRECT\nillumos: t/004_io_direct.pl .............. ok\n\n(Format different because those last two are autoconf, no meson on my\ncollection of Vagrant images yet...)\n\n\n",
"msg_date": "Thu, 13 Apr 2023 14:04:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Re: Thomas Munro\n> Linux/tmpfs: 1..0 # SKIP pre-flight test if we can open a file with\n> O_DIRECT failed: Invalid argument\n\nI confirm it's working now:\n\nt/004_io_direct.pl .............. skipped: pre-flight test if we can open a file with O_DIRECT failed: Invalid argument\nAll tests successful.\n\nThanks,\nChristoph\n\n\n",
"msg_date": "Thu, 13 Apr 2023 15:31:40 -0700",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Since the direct I/O commit went in, buildfarm animals\ncurculio and morepork have been issuing warnings like\n\nhashpage.c: In function '_hash_expandtable':\nhashpage.c:995: warning: ignoring alignment for stack allocated 'zerobuf'\n\nin places where there's a local variable of type PGIOAlignedBlock\nor PGAlignedXLogBlock. I'm not sure why only those two animals\nare unhappy, but I think they have a point: typical ABIs don't\nguarantee alignment of function stack frames to better than\n16 bytes or so. In principle the compiler could support a 4K\nalignment request anyway by doing the equivalent of alloca(3),\nbut I do not think we can count on that to happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 13:21:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-14 13:21:33 -0400, Tom Lane wrote:\n> Since the direct I/O commit went in, buildfarm animals\n> curculio and morepork have been issuing warnings like\n> \n> hashpage.c: In function '_hash_expandtable':\n> hashpage.c:995: warning: ignoring alignment for stack allocated 'zerobuf'\n> \n> in places where there's a local variable of type PGIOAlignedBlock\n> or PGAlignedXLogBlock. I'm not sure why only those two animals\n> are unhappy, but I think they have a point: typical ABIs don't\n> guarantee alignment of function stack frames to better than\n> 16 bytes or so. In principle the compiler could support a 4K\n> alignment request anyway by doing the equivalent of alloca(3),\n> but I do not think we can count on that to happen.\n\nHm. New-ish compilers seem to be ok with it. Perhaps we should have a\nconfigure check whether the compiler is OK with that, and disable direct IO\nsupport if not?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Apr 2023 11:56:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-14 13:21:33 -0400, Tom Lane wrote:\n>> ... I'm not sure why only those two animals\n>> are unhappy, but I think they have a point: typical ABIs don't\n>> guarantee alignment of function stack frames to better than\n>> 16 bytes or so. In principle the compiler could support a 4K\n>> alignment request anyway by doing the equivalent of alloca(3),\n>> but I do not think we can count on that to happen.\n\n> Hm. New-ish compilers seem to be ok with it.\n\nOh! I was misled by the buildfarm label on morepork, which claims\nit's running clang 10.0.1. But actually, per its configure report,\nit's running\n\n\tconfigure: using compiler=gcc (GCC) 4.2.1 20070719 \n\nwhich is the same as curculio. So that explains why nothing else is\ncomplaining. I agree we needn't let 15-year-old compilers force us\ninto the mess that would be entailed by not treating these variables\nas simple locals.\n\n> Perhaps we should have a\n> configure check whether the compiler is OK with that, and disable direct IO\n> support if not?\n\n+1 for that, though. (Also, the fact that these animals aren't\nactually failing suggests that 004_io_direct.pl needs expansion.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 15:21:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-14 15:21:18 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-04-14 13:21:33 -0400, Tom Lane wrote:\n> >> ... I'm not sure why only those two animals\n> >> are unhappy, but I think they have a point: typical ABIs don't\n> >> guarantee alignment of function stack frames to better than\n> >> 16 bytes or so. In principle the compiler could support a 4K\n> >> alignment request anyway by doing the equivalent of alloca(3),\n> >> but I do not think we can count on that to happen.\n> \n> > Hm. New-ish compilers seem to be ok with it.\n> \n> Oh! I was misled by the buildfarm label on morepork, which claims\n> it's running clang 10.0.1. But actually, per its configure report,\n> it's running\n> \n> \tconfigure: using compiler=gcc (GCC) 4.2.1 20070719 \n\nHuh. I wonder if that was an accident in the BF setup.\n\n\n> > Perhaps we should have a\n> > configure check whether the compiler is OK with that, and disable direct IO\n> > support if not?\n> \n> +1 for that, though. (Also, the fact that these animals aren't\n> actually failing suggests that 004_io_direct.pl needs expansion.)\n\nIt's skipped, due to lack of O_DIRECT:\n[20:50:22] t/004_io_direct.pl .............. skipped: no O_DIRECT\n\nSo perhaps we don't even need a configure test, just a bit of ifdef'ery? It's\na bit annoying structurally, because the PG*Aligned structs are defined in\nc.h, but the different ways of spelling O_DIRECT are dealt with in fd.h.\n\nI wonder if we should try to move those structs to fd.h as well...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Apr 2023 12:33:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-14 15:21:18 -0400, Tom Lane wrote:\n>> +1 for that, though. (Also, the fact that these animals aren't\n>> actually failing suggests that 004_io_direct.pl needs expansion.)\n\n> It's skipped, due to lack of O_DIRECT:\n> [20:50:22] t/004_io_direct.pl .............. skipped: no O_DIRECT\n\nHmm, I'd say that might be just luck. Whether the compiler honors weird\nalignment of locals seems independent of whether the OS has O_DIRECT.\n\n> So perhaps we don't even need a configure test, just a bit of ifdef'ery? It's\n> a bit annoying structurally, because the PG*Aligned structs are defined in\n> c.h, but the different ways of spelling O_DIRECT are dealt with in fd.h.\n\n> I wonder if we should try to move those structs to fd.h as well...\n\nI doubt they belong in c.h, so that could be plausible; except\nI'm not convinced that testing O_DIRECT is sufficient.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 15:38:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\n\nOn 2023-04-14 21:33, Andres Freund wrote:\n\n>> Oh! I was misled by the buildfarm label on morepork, which claims\n>> it's running clang 10.0.1. But actually, per its configure report,\n>> it's running\n>>\n>> \tconfigure: using compiler=gcc (GCC) 4.2.1 20070719\n> \n> Huh. I wonder if that was an accident in the BF setup.\n\nI might have been when I reinstalled it a while ago.\n\nI have the following gcc and clang installed:\n\nopenbsd_6_9-pgbf$ gcc --version\ngcc (GCC) 4.2.1 20070719\nCopyright (C) 2007 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\nopenbsd_6_9-pgbf$ clang --version\nOpenBSD clang version 10.0.1\nTarget: amd64-unknown-openbsd6.9\nThread model: posix\nInstalledDir: /usr/bin\n\nwant me to switch to clang instead?\n\n/Mikael\n\n\n",
"msg_date": "Fri, 14 Apr 2023 21:50:29 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 7:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-04-14 15:21:18 -0400, Tom Lane wrote:\n> >> +1 for that, though. (Also, the fact that these animals aren't\n> >> actually failing suggests that 004_io_direct.pl needs expansion.)\n>\n> > It's skipped, due to lack of O_DIRECT:\n> > [20:50:22] t/004_io_direct.pl .............. skipped: no O_DIRECT\n>\n> Hmm, I'd say that might be just luck. Whether the compiler honors weird\n> alignment of locals seems independent of whether the OS has O_DIRECT.\n>\n> > So perhaps we don't even need a configure test, just a bit of ifdef'ery? It's\n> > a bit annoying structurally, because the PG*Aligned structs are defined in\n> > c.h, but the different ways of spelling O_DIRECT are dealt with in fd.h.\n>\n> > I wonder if we should try to move those structs to fd.h as well...\n>\n> I doubt they belong in c.h, so that could be plausible; except\n> I'm not convinced that testing O_DIRECT is sufficient.\n\nAs far as I can tell, the failure to honour large alignment attributes\neven though the compiler passes our configure check that you can do\nthat was considered to be approximately a bug[1] or at least a thing\nto be improved in fairly old GCC times but the fix wasn't back-patched\nthat far. Unfortunately the projects that were allergic to the GPL3\nchange but wanted to ship a compiler (or some motivation related to\nthat) got stuck on 4.2 for a while before they flipped to Clang (as\nOpenBSD has now done). It seems hard to get excited about doing\nanything about that on our side, and those systems are also spewing\nother warnings. But if we're going to do it, it looks like the right\nplace would indeed be a new compiler check that the attribute exists\n*and* generates no warnings with alignment > 16, something like that?\n\nhttps://gcc.gnu.org/bugzilla/show_bug.cgi?id=16660\n\n\n",
"msg_date": "Sat, 15 Apr 2023 11:16:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 7:50 AM Mikael Kjellström\n<mikael.kjellstrom@gmail.com> wrote:\n> want me to switch to clang instead?\n\nI vote +1, that's the system compiler in modern OpenBSD.\n\nhttps://www.cambus.net/the-state-of-toolchains-in-openbsd/\n\nAs for curculio, I don't understand the motivation for maintaining\nthat machine. I'd rather know if OpenBSD 7.3 works.\n\n\n",
"msg_date": "Sat, 15 Apr 2023 12:26:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Apr 15, 2023 at 7:50 AM Mikael Kjellström\n> <mikael.kjellstrom@gmail.com> wrote:\n>> want me to switch to clang instead?\n\n> I vote +1, that's the system compiler in modern OpenBSD.\n\nDitto, we need coverage of that.\n\n> As for curculio, I don't understand the motivation for maintaining\n> that machine. I'd rather know if OpenBSD 7.3 works.\n\nThose aren't necessarily mutually exclusive :-). But I do agree\nthat recent OpenBSD is more important to cover than ancient OpenBSD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 23:22:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\n\nOn 2023-04-15 05:22, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Sat, Apr 15, 2023 at 7:50 AM Mikael Kjellström\n>> <mikael.kjellstrom@gmail.com> wrote:\n>>> want me to switch to clang instead?\n> \n>> I vote +1, that's the system compiler in modern OpenBSD.\n> \n> Ditto, we need coverage of that.\n\nOK. I switched to clang on morepork now.\n\n\n>> As for curculio, I don't understand the motivation for maintaining\n>> that machine. I'd rather know if OpenBSD 7.3 works.\n> \n> Those aren't necessarily mutually exclusive :-). But I do agree\n> that recent OpenBSD is more important to cover than ancient OpenBSD.\n\nSo do you want me to switch that machine to OpenBSD 7.3 instead?\n\n/Mikael\n\n\n\n\n",
"msg_date": "Sat, 15 Apr 2023 07:48:52 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> As far as I can tell, the failure to honour large alignment attributes\n> even though the compiler passes our configure check that you can do\n> that was considered to be approximately a bug[1] or at least a thing\n> to be improved in fairly old GCC times but the fix wasn't back-patched\n> that far. Unfortunately the projects that were allergic to the GPL3\n> change but wanted to ship a compiler (or some motivation related to\n> that) got stuck on 4.2 for a while before they flipped to Clang (as\n> OpenBSD has now done). It seems hard to get excited about doing\n> anything about that on our side, and those systems are also spewing\n> other warnings. But if we're going to do it, it looks like the right\n> place would indeed be a new compiler check that the attribute exists\n> *and* generates no warnings with alignment > 16, something like that?\n\nI tested this by building gcc 4.2.1 from source on modern Linux\n(which was a bit more painful than it ought to be, perhaps)\nand building PG with that. 
It generates no warnings, but nonetheless\ngets an exception in CREATE DATABASE:\n\n#2 0x0000000000b64522 in ExceptionalCondition (\n conditionName=0xd4fca0 \"(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)\", fileName=0xd4fbe0 \"md.c\", lineNumber=468) at assert.c:66\n#3 0x00000000009a6b53 in mdextend (reln=0x1dcaf68, forknum=MAIN_FORKNUM, \n blocknum=18, buffer=0x7ffcaf8e1af0, skipFsync=true) at md.c:468\n#4 0x00000000009a9075 in smgrextend (reln=0x1dcaf68, forknum=MAIN_FORKNUM, \n blocknum=18, buffer=0x7ffcaf8e1af0, skipFsync=true) at smgr.c:500\n#5 0x000000000096739c in RelationCopyStorageUsingBuffer (srclocator=..., \n dstlocator=..., forkNum=MAIN_FORKNUM, permanent=true) at bufmgr.c:4286\n#6 0x0000000000967584 in CreateAndCopyRelationData (src_rlocator=..., \n dst_rlocator=..., permanent=true) at bufmgr.c:4361\n#7 0x000000000068898e in CreateDatabaseUsingWalLog (src_dboid=1, \n dst_dboid=24576, src_tsid=1663, dst_tsid=1663) at dbcommands.c:217\n#8 0x000000000068b594 in createdb (pstate=0x1d4a6a8, stmt=0x1d20ec8)\n at dbcommands.c:1441\n\nSure enough, that buffer is a stack local in\nRelationCopyStorageUsingBuffer, and it's visibly got a\nnot-very-well-aligned address.\n\nSo apparently, the fact that you even get a warning about the\nalignment not being honored is something OpenBSD patched in\nafter-the-fact; it's not there in genuine vintage gcc.\n\nI get the impression that we are going to need an actual runtime\ntest if we want to defend against this. Not entirely convinced\nit's worth the trouble. Who, other than our deliberately rear-guard\nbuildfarm animals, is going to be building modern PG with such old\ncompilers? 
(And more especially to the point, on platforms new\nenough to have working O_DIRECT?)\n\nAt this point I agree with Andres that it'd be good enough to\nsilence the warning by getting rid of these alignment pragmas\nwhen the platform lacks O_DIRECT.\n\n\t\t\tregards, tom lane\n\nPS: I don't quite understand how it managed to get through initdb\nwhen CREATE DATABASE doesn't work. Maybe there is a different\ncode path taken in standalone mode?\n\n\n",
"msg_date": "Sat, 15 Apr 2023 14:19:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 16, 2023 at 6:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So apparently, the fact that you even get a warning about the\n> alignment not being honored is something OpenBSD patched in\n> after-the-fact; it's not there in genuine vintage gcc.\n\nAh, that is an interesting discovery, and indeed kills the configure check idea.\n\n> At this point I agree with Andres that it'd be good enough to\n> silence the warning by getting rid of these alignment pragmas\n> when the platform lacks O_DIRECT.\n\nHmm. My preferred choice would be: accept Mikael's kind offer to\nupgrade curculio to a live version, forget about GCC 4.2.1 forever,\nand do nothing. It is a dead parrot.\n\nBut if we really want to do something about this, my next preferred\noption would be to modify c.h's test to add more conditions, here:\n\n/* GCC, Sunpro and XLC support aligned, packed and noreturn */\n#if defined(__GNUC__) || defined(__SUNPRO_C) || defined(__IBMC__)\n#define pg_attribute_aligned(a) __attribute__((aligned(a)))\n...\n\nFull GCC support including stack objects actually began in 4.6, it\nseems. It might require a bit of research because the GCC-workalikes\nincluding Clang also claim to be certain versions of GCC (for example\nI think Clang 7 would be excluded if you excluded GCC 4.2, even though\nthis particular thing apparently worked fine in Clang 7). That's my\nbest idea, ie to actually model the feature history accurately, if we\nare suspending disbelief and pretending that it is a reasonable\ntarget.\n\n\n",
"msg_date": "Sun, 16 Apr 2023 09:48:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Full GCC support including stack objects actually began in 4.6, it\n> seems.\n\nHmm. The oldest gcc versions remaining in the buildfarm seem to be\n\n curculio | configure: using compiler=gcc (GCC) 4.2.1 20070719 \n frogfish | configure: using compiler=gcc (Debian 4.6.3-14) 4.6.3\n lapwing | configure: using compiler=gcc (Debian 4.7.2-5) 4.7.2\n skate | configure: using compiler=gcc-4.7 (Debian 4.7.2-5) 4.7.2\n snapper | configure: using compiler=gcc-4.7 (Debian 4.7.2-5) 4.7.2\n buri | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n chub | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n dhole | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n mantid | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n prion | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)\n rhinoceros | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n siskin | configure: using compiler=gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n shelduck | configure: using compiler=gcc (SUSE Linux) 4.8.5\n topminnow | configure: using compiler=gcc (Debian 4.9.2-10+deb8u1) 4.9.2\n\nso curculio should be the only one that's at risk here.\nMaybe just upgrading it is the right answer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 15 Apr 2023 18:10:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 02:19:35PM -0400, Tom Lane wrote:\n> PS: I don't quite understand how it managed to get through initdb\n> when CREATE DATABASE doesn't work. Maybe there is a different\n> code path taken in standalone mode?\n\nad43a413c4f7f5d024a5b2f51e00d280a22f1874\n initdb: When running CREATE DATABASE, use STRATEGY = WAL_COPY.\n\n\n",
"msg_date": "Sat, 15 Apr 2023 17:20:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\n\nOn 2023-04-16 00:10, Tom Lane wrote:\n\n> so curculio should be the only one that's at risk here.\n> Maybe just upgrading it is the right answer.\n\nJust let me know if I should switch curculio to OpenBSD 7.3.\n\nI already have a new server setup so only need to switch the \"animal\" \nand \"secret\" and enable the cron job to get it running.\n\n/Mikael\n\n\n",
"msg_date": "Sun, 16 Apr 2023 09:54:15 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n> On 2023-04-16 00:10, Tom Lane wrote:\n>> so curculio should be the only one that's at risk here.\n>> Maybe just upgrading it is the right answer.\n\n> Just let me know if I should switch curculio to OpenBSD 7.3.\n\nYes please.\n\n> I already have a new server setup so only need to switch the \"animal\" \n> and \"secret\" and enable the cron job to get it running.\n\nActually, as long as it's still OpenBSD I think you can keep using\nthe same animal name ... Andrew, what's the policy on that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 10:18:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\nOn 2023-04-16 16:18, Tom Lane wrote:\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n>> On 2023-04-16 00:10, Tom Lane wrote:\n>>> so curculio should be the only one that's at risk here.\n>>> Maybe just upgrading it is the right answer.\n> \n>> Just let me know if I should switch curculio to OpenBSD 7.3.\n> \n> Yes please.\n\nOk.\n\n\n>> I already have a new server setup so only need to switch the \"animal\"\n>> and \"secret\" and enable the cron job to get it running.\n> \n> Actually, as long as it's still OpenBSD I think you can keep using\n> the same animal name ... Andrew, what's the policy on that?\n\nThat is what I meant with above.\n\nI just use the same animal name and secret and then run \n\"update_personality.pl\".\n\nThat should be enough I think?\n\n/Mikael\n\n\n\n",
"msg_date": "Sun, 16 Apr 2023 16:51:04 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 2023-04-16 Su 10:18, Tom Lane wrote:\n> =?UTF-8?Q?Mikael_Kjellstr=c3=b6m?=<mikael.kjellstrom@mksoft.nu> writes:\n>> On 2023-04-16 00:10, Tom Lane wrote:\n>>> so curculio should be the only one that's at risk here.\n>>> Maybe just upgrading it is the right answer.\n>> Just let me know if I should switch curculio to OpenBSD 7.3.\n> Yes please.\n>\n>> I already have a new server setup so only need to switch the \"animal\"\n>> and \"secret\" and enable the cron job to get it running.\n> Actually, as long as it's still OpenBSD I think you can keep using\n> the same animal name ... Andrew, what's the policy on that?\n>\n> \t\t\t\n\n\nupdate_personality.pl lets you update the OS version / compiler version \n/ owner-name / owner-email\n\n\nI am in fact about to perform this exact operation for prion.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-16 Su 10:18, Tom Lane wrote:\n\n\n=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu> writes:\n\n\nOn 2023-04-16 00:10, Tom Lane wrote:\n\n\nso curculio should be the only one that's at risk here.\nMaybe just upgrading it is the right answer.\n\n\n\n\n\n\nJust let me know if I should switch curculio to OpenBSD 7.3.\n\n\n\nYes please.\n\n\n\nI already have a new server setup so only need to switch the \"animal\" \nand \"secret\" and enable the cron job to get it running.\n\n\n\nActually, as long as it's still OpenBSD I think you can keep using\nthe same animal name ... Andrew, what's the policy on that?\n\n\t\t\t\n\n\n\nupdate_personality.pl lets you update the OS version / compiler\n version / owner-name / owner-email\n\n\nI am in fact about to perform this exact operation for prion.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 16 Apr 2023 11:29:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-04-16 Su 10:18, Tom Lane wrote:\n>> Actually, as long as it's still OpenBSD I think you can keep using\n>> the same animal name ... Andrew, what's the policy on that?\n\n> update_personality.pl lets you update the OS version / compiler version \n> / owner-name / owner-email\n\nOh wait ... this involves a switch from gcc in OpenBSD 5.9 to clang\nin OpenBSD 7.3, doesn't it? That isn't something update_personality\nwill handle; you need a new animal if the compiler product is changing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 12:16:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\n\n> On Apr 16, 2023, at 12:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> On 2023-04-16 Su 10:18, Tom Lane wrote:\n>>> Actually, as long as it's still OpenBSD I think you can keep using\n>>> the same animal name ... Andrew, what's the policy on that?\n> \n>> update_personality.pl lets you update the OS version / compiler version \n>> / owner-name / owner-email\n> \n> Oh wait ... this involves a switch from gcc in OpenBSD 5.9 to clang\n> in OpenBSD 7.3, doesn't it? That isn't something update_personality\n> will handle; you need a new animal if the compiler product is changing.\n> \n> \n\nCorrect.\n\nCheers\n\nAndrew\n\n",
"msg_date": "Sun, 16 Apr 2023 13:59:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\n\nOn 2023-04-16 19:59, Andrew Dunstan wrote:\n\n>> On Apr 16, 2023, at 12:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>>> On 2023-04-16 Su 10:18, Tom Lane wrote:\n>>>> Actually, as long as it's still OpenBSD I think you can keep using\n>>>> the same animal name ... Andrew, what's the policy on that?\n>>\n>>> update_personality.pl lets you update the OS version / compiler version\n>>> / owner-name / owner-email\n>>\n>> Oh wait ... this involves a switch from gcc in OpenBSD 5.9 to clang\n>> in OpenBSD 7.3, doesn't it? That isn't something update_personality\n>> will handle; you need a new animal if the compiler product is changing.\n>>\n>> \n> \n> Correct.\n\nOK. I registered a new animal for this then.\n\nSo if someone could look at that and give be an animal name + secret I \ncan set this up.\n\n/Mikael\n\n\n\n",
"msg_date": "Sun, 16 Apr 2023 20:05:02 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 16, 2023 at 04:51:04PM +0200, Mikael Kjellström wrote:\n> That is what I meant with above.\n> \n> I just use the same animal name and secret and then run\n> \"update_personality.pl\".\n> \n> That should be enough I think?\n\nYes, that should be enough as far as I recall. This has been\nmentioned a couple of weeks ago here:\nhttps://www.postgresql.org/message-id/CA+hUKGK0jJ+G+bxLUZqpBsxpvEg7Lvt1v8LBxFkZbrvtFTSghw@mail.gmail.com\n\nI have also used setnotes.pl to reflect my animals' CFLAGS on the\nwebsite.\n--\nMichael",
"msg_date": "Mon, 17 Apr 2023 07:22:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "\n\nOn 2023-04-16 20:05, Mikael Kjellström wrote:\n\n>>> Oh wait ... this involves a switch from gcc in OpenBSD 5.9 to clang\n>>> in OpenBSD 7.3, doesn't it? That isn't something update_personality\n>>> will handle; you need a new animal if the compiler product is changing.\n>>>\n>>\n>> Correct.\n> \n> OK. I registered a new animal for this then.\n> \n> So if someone could look at that and give be an animal name + secret I \n> can set this up.\n\nI have setup a new animal \"schnauzer\" (thanks andrew!).\n\nThat should report in a little while.\n\n/Mikael\n\n\n\n",
"msg_date": "Mon, 17 Apr 2023 07:44:36 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 2:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I get the impression that we are going to need an actual runtime\n> test if we want to defend against this. Not entirely convinced\n> it's worth the trouble. Who, other than our deliberately rear-guard\n> buildfarm animals, is going to be building modern PG with such old\n> compilers? (And more especially to the point, on platforms new\n> enough to have working O_DIRECT?)\n\nI don't think that I fully understand everything under discussion\nhere, but I would just like to throw in a vote for trying to make\nfailures as comprehensible as we reasonably can. It makes me a bit\nnervous to rely on things like \"anybody who has O_DIRECT will also\nhave working alignment pragmas,\" because there's no relationship\nbetween those things other than when we think they got implemented on\nthe platforms that are popular today. If somebody ships me a brand new\nDeathstation 9000 that has O_DIRECT but NOT alignment pragmas, how\nbadly are things going to break and how hard is it going to be for me\nto understand why it's not working?\n\nI understand that nobody (including me) wants the code cluttered with\na bunch of useless cruft that caters only to hypothetical systems, and\nI don't want us to spend a lot of effort building untestable\ninfrastructure that caters only to such machines. I just don't want us\nto do things that are more magical than they need to be. If and when\nsomething fails, it's real nice if you can easily understand why it\nfailed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 11:44:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Apr 15, 2023 at 2:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I get the impression that we are going to need an actual runtime\n>> test if we want to defend against this. Not entirely convinced\n>> it's worth the trouble. Who, other than our deliberately rear-guard\n>> buildfarm animals, is going to be building modern PG with such old\n>> compilers? (And more especially to the point, on platforms new\n>> enough to have working O_DIRECT?)\n\n> I don't think that I fully understand everything under discussion\n> here, but I would just like to throw in a vote for trying to make\n> failures as comprehensible as we reasonably can.\n\nI'm not hugely concerned about this yet. I think the reason for\nslipping this into v16 as developer-only code is exactly that we need\nto get a feeling for where the portability dragons live. When (and\nif) we try to make O_DIRECT mainstream, yes we'd better be sure that\nany known failure cases are reported well. But we need the data\nabout which those are, first.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 12:06:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 4:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sat, Apr 15, 2023 at 2:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I get the impression that we are going to need an actual runtime\n> >> test if we want to defend against this. Not entirely convinced\n> >> it's worth the trouble. Who, other than our deliberately rear-guard\n> >> buildfarm animals, is going to be building modern PG with such old\n> >> compilers? (And more especially to the point, on platforms new\n> >> enough to have working O_DIRECT?)\n>\n> > I don't think that I fully understand everything under discussion\n> > here, but I would just like to throw in a vote for trying to make\n> > failures as comprehensible as we reasonably can.\n>\n> I'm not hugely concerned about this yet. I think the reason for\n> slipping this into v16 as developer-only code is exactly that we need\n> to get a feeling for where the portability dragons live. When (and\n> if) we try to make O_DIRECT mainstream, yes we'd better be sure that\n> any known failure cases are reported well. But we need the data\n> about which those are, first.\n\n+1\n\nA couple more things I wanted to note:\n\n* We have no plans to turn this on by default even when the later\nasynchronous machinery is proposed, and direct I/O starts to make more\neconomic sense (think: your stream of small reads and writes will be\nconverted to larger preadv/pwritev or moral equivalent and performed\nahead of time in the background). Reasons: (1) There will always be a\nfew file systems that refuse O_DIRECT (Linux tmpfs is one such, as we\nlearned in this thread; if fails with EINVAL at open() time), and (2)\nwithout a page cache, you really need to size your shared_buffers\nadequately and we can't do that automatically. It's something you'd\nopt into for a dedicated database server along with other carefully\nconsidered settings. 
It seems acceptable to me that if you set\nio_direct to a non-default setting on an unusual-for-a-database-server\nfilesystem you might get errors screaming about inability to open\nfiles -- you'll just have to turn it back off again if it doesn't work\nfor you.\n\n* For the alignment part, C11 has \"alignas(x)\" in <stdalign.h>, so I\nvery much doubt that a hypothetical new Deathstation C compiler would\nnot know how to align stack objects arbitrarily, even though for now\nas a C99 program we have to use the non-standard incantations defined\nin our c.h. I assume we'll eventually switch to that. In the\nmeantime, if someone manages to build PostgreSQL on a hypothetical C\ncompiler that our c.h doesn't recognise, we just won't let you turn\nthe io_direct GUC on (because we set PG_O_DIRECT to 0 if we don't have\nan alignment macro, see commit faeedbce's message for rationale). If\nthe alignment trick from c.h appears to be available but is actually\nbroken (GCC 4.2.1), then those assertions I added into smgrread() et\nal will fail as Tom showed (yay! they did their job), or in a\nnon-assert build you'll probably get EINVAL when you try to read or\nwrite from your badly aligned buffers depending on how picky your OS\nis, but that's just an old bug in a defunct compiler that we have by\nnow written more about they ever did in their bug tracker.\n\n* I guess it's unlikely at this point that POSIX will ever standardise\nO_DIRECT if they didn't already in the 90s (I didn't find any\ndiscussion of it in their issue tracker). There is really only one OS\non our target list that truly can't do direct I/O at all: OpenBSD. It\nseems a reasonable bet that if they or a hypothetical totally new\nUnixoid system ever implemented it they'd spell it the same IRIX way\nfor practical reasons, but if not we just won't use it until someone\nwrites a patch *shrug*. 
There is also one system that's been rocking\ndirect I/O since the 90s for Oracle etc, but PostgreSQL still doesn't\nknow how to turn it on: Solaris has a directio() system call. I\nposted a (trivial) patch for that once in the thread where I added\nApple F_NOCACHE, but there is probably nobody on this list who can\ntest it successfully (as Tom discovered, wrasse's host is not\nconfigured right for it, you'd need an admin/root to help set up a UFS\nfile system, or perhaps modern (closed) ZFS can do it but that system\nis old and unpatched), and I have no desire to commit a \"blind\" patch\nfor an untested niche setup; I really only considered it because I\nrealised I was so close to covering the complete set of OSes. That's\ncool, we just won't let you turn the GUC on if we don't know how and\nthe error message is clear about that if you try.\n\n\n",
"msg_date": "Tue, 18 Apr 2023 09:44:10 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, 17 Apr 2023 at 17:45, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Reasons: (1) There will always be a\n> few file systems that refuse O_DIRECT (Linux tmpfs is one such, as we\n> learned in this thread; if fails with EINVAL at open() time), and\n\nSo why wouldn't we just automatically turn it off (globally or for\nthat tablespace) and keep operating without it afterward?\n\n> (2) without a page cache, you really need to size your shared_buffers\n> adequately and we can't do that automatically.\n\nWell.... I'm more optimistic... That may not always be impossible.\nWe've already added the ability to add more shared memory after\nstartup. We could implement the ability to add or remove shared buffer\nsegments after startup. And it wouldn't be crazy to imagine a kernel\ninterface that lets us judge whether the kernel memory pressure makes\nit reasonable for us to take more shared buffers or makes it necessary\nto release shared memory to the kernel. You could hack something\ntogether using /proc/meminfo today but I imagine an interface intended\nfor this kind of thing would be better.\n\n> It's something you'd\n> opt into for a dedicated database server along with other carefully\n> considered settings. It seems acceptable to me that if you set\n> io_direct to a non-default setting on an unusual-for-a-database-server\n> filesystem you might get errors screaming about inability to open\n> files -- you'll just have to turn it back off again if it doesn't work\n> for you.\n\nIf the only solution is to turn it off perhaps the server should just\nturn it off? I guess the problem is that the shared_buffers might be\nset assuming it would be on?\n\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 18 Apr 2023 15:35:09 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 3:35 PM Greg Stark <stark@mit.edu> wrote:\n> Well.... I'm more optimistic... That may not always be impossible.\n> We've already added the ability to add more shared memory after\n> startup. We could implement the ability to add or remove shared buffer\n> segments after startup. And it wouldn't be crazy to imagine a kernel\n> interface that lets us judge whether the kernel memory pressure makes\n> it reasonable for us to take more shared buffers or makes it necessary\n> to release shared memory to the kernel.\n\nOn this point specifically, one fairly large problem that we have\ncurrently is that our buffer replacement algorithm is terrible. In\nworkloads I've examined, either almost all buffers end up with a usage\ncount of 5 or almost all buffers end up with a usage count of 0 or 1.\nEither way, we lose all or nearly all information about which buffers\nare actually hot, and we are not especially unlikely to evict some\nextremely hot buffer. This is quite bad for performance as it is, and\nit would be a lot worse if recovering from a bad eviction decision\nroutinely required rereading from disk instead of only rereading from\nthe OS buffer cache.\n\nI've sometimes wondered whether our current algorithm is just a more\nexpensive version of random eviction. I suspect that's a bit too\npessimistic, but I don't really know.\n\nI'm not saying that it isn't possible to fix this. I bet it is, and I\nhope someone does. I'm just making the point that even if we knew the\namount of kernel memory pressure and even if we also had the ability\nto add and remove shared_buffers at will, it probably wouldn't help\nmuch as things stand today, because we're not in a good position to\njudge how large the cache would need to be in order to be useful, or\nwhat we ought to be storing in it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:11:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 4/19/23 10:11, Robert Haas wrote:\n> On Tue, Apr 18, 2023 at 3:35 PM Greg Stark <stark@mit.edu> wrote:\n>> Well.... I'm more optimistic... That may not always be impossible.\n>> We've already added the ability to add more shared memory after\n>> startup. We could implement the ability to add or remove shared buffer\n>> segments after startup. And it wouldn't be crazy to imagine a kernel\n>> interface that lets us judge whether the kernel memory pressure makes\n>> it reasonable for us to take more shared buffers or makes it necessary\n>> to release shared memory to the kernel.\n> \n> On this point specifically, one fairly large problem that we have\n> currently is that our buffer replacement algorithm is terrible. In\n> workloads I've examined, either almost all buffers end up with a usage\n> count of 5 or almost all buffers end up with a usage count of 0 or 1.\n> Either way, we lose all or nearly all information about which buffers\n> are actually hot, and we are not especially unlikely to evict some\n> extremely hot buffer.\n\nThat has been my experience as well, although admittedly I have not \nlooked in quite a while.\n\n\n> I'm not saying that it isn't possible to fix this. 
I bet it is, and I\n> hope someone does.\n\nI keep looking at this blog post about Transparent Memory Offloading and \nthinking that we could learn from it:\n\nhttps://engineering.fb.com/2022/06/20/data-infrastructure/transparent-memory-offloading-more-memory-at-a-fraction-of-the-cost-and-power/\n\nUnfortunately, it is very Linux specific and requires a really up to \ndate OS -- cgroup v2, kernel >= 5.19\n\n> I'm just making the point that even if we knew the amount of kernel\n> memory pressure and even if we also had the ability to add and remove\n> shared_buffers at will, it probably wouldn't help much as things\n> stand today, because we're not in a good position to judge how large\n> the cache would need to be in order to be useful, or what we ought to\n> be storing in it.\n\nThe tactic TMO uses is basically to tune the available memory to get a \ntarget memory pressure. That seems like it could work.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:24:59 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-19 10:11:32 -0400, Robert Haas wrote:\n> On this point specifically, one fairly large problem that we have\n> currently is that our buffer replacement algorithm is terrible. In\n> workloads I've examined, either almost all buffers end up with a usage\n> count of 5 or almost all buffers end up with a usage count of 0 or 1.\n> Either way, we lose all or nearly all information about which buffers\n> are actually hot, and we are not especially unlikely to evict some\n> extremely hot buffer. This is quite bad for performance as it is, and\n> it would be a lot worse if recovering from a bad eviction decision\n> routinely required rereading from disk instead of only rereading from\n> the OS buffer cache.\n\nInterestingly, I haven't seen that as much in more recent benchmarks as it\nused to. Partially I think because common s_b settings have gotten bigger, I\nguess. But I wonder if we also accidentally improved something else, e.g. by\npin/unpin-ing the same buffer a bit less frequently.\n\n\n> I've sometimes wondered whether our current algorithm is just a more\n> expensive version of random eviction. I suspect that's a bit too\n> pessimistic, but I don't really know.\n\nI am quite certain that it's better than that. If you e.g. have pkey lookup\nworkload >> RAM you can actually end up seeing inner index pages staying\nreliably in s_b. But clearly we can do better.\n\n\nI do think we likely should (as IIRC Peter Geoghegan suggested) provide more\ninformation to the buffer replacement layer:\n\nIndependent of the concrete buffer replacement algorithm, the recency\ninformation we do provide is somewhat lacking. In some places we do repeated\nReadBuffer() calls for the same buffer, leading to over-inflating usagecount.\n\nWe should seriously consider using the cost of the IO into account, basically\nmaking it more likely that s_b is increased when we need to synchronously wait\nfor IO. The cost of a miss is much lower for e.g. 
a sequential scan or a\nbitmap heap scan, because both can do some form of prefetching. Whereas index\npages and the heap fetch for plain index scans aren't prefetchable (which\ncould be improved some, but not generally).\n\n\n> I'm not saying that it isn't possible to fix this. I bet it is, and I\n> hope someone does. I'm just making the point that even if we knew the\n> amount of kernel memory pressure and even if we also had the ability\n> to add and remove shared_buffers at will, it probably wouldn't help\n> much as things stand today, because we're not in a good position to\n> judge how large the cache would need to be in order to be useful, or\n> what we ought to be storing in it.\n\nFWIW, my experience is that linux' page replacement doesn't work very well\neither. Partially because we \"hide\" a lot of the recency information from\nit. But also just because it doesn't scale all that well to large amounts of\nmemory (there's ongoing work on that though). So I am not really convinced by\nthis argument - for plenty workloads just caching in PG will be far better\nthan caching both in the kernel and in PG, as long as some adaptiveness to\nmemory pressure avoids running into OOMs.\n\nSome forms of adaptive s_b sizing aren't particularly hard, I think. Instead\nof actually changing the s_b shmem allocation - which would be very hard in a\nprocess based model - we can tell the kernel that some parts of that memory\naren't currently in use with madvise(MADV_REMOVE). It's not quite as trivial\nas it sounds, because we'd have to free in multiple of huge_page_size.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Apr 2023 09:54:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-18 09:44:10 +1200, Thomas Munro wrote:\n> * We have no plans to turn this on by default even when the later\n> asynchronous machinery is proposed, and direct I/O starts to make more\n> economic sense (think: your stream of small reads and writes will be\n> converted to larger preadv/pwritev or moral equivalent and performed\n> ahead of time in the background). Reasons: (1) There will always be a\n> few file systems that refuse O_DIRECT (Linux tmpfs is one such, as we\n> learned in this thread; if fails with EINVAL at open() time), and (2)\n> without a page cache, you really need to size your shared_buffers\n> adequately and we can't do that automatically. It's something you'd\n> opt into for a dedicated database server along with other carefully\n> considered settings. It seems acceptable to me that if you set\n> io_direct to a non-default setting on an unusual-for-a-database-server\n> filesystem you might get errors screaming about inability to open\n> files -- you'll just have to turn it back off again if it doesn't work\n> for you.\n\nFWIW, *long* term I think it might sense to turn DIO on automatically for a\nsmall subset of operations, if supported. Examples:\n\n1) Once we have the ability to \"feed\" walsenders from wal_buffers, instead of\n going to disk, automatically using DIO for WAL might be beneficial. The\n increase in IO concurrency and reduction in latency one can get is\n substantial.\n\n2) If we make base backups use s_b if pages are in s_b, and do locking via s_b\n for non-existing pages, it might be worth automatically using DIO for the\n reads of the non-resident data, to avoid swamping the kernel page cache\n with data that won't be read again soon (and to utilize DMA etc).\n\n3) When writing back dirty data that we don't expect to be dirtied again soon,\n e.g. 
from vacuum ringbuffers or potentially even checkpoints, it could make\n sense to use DIO, to avoid the kernel keeping such pages in the page cache.\n\n\nBut for the main s_b, I agree, I can't forsee us turning on DIO by\ndefault. Unless somebody has tuned s_b at least some for the workload, that's\nnot going to go well. And even if somebody has, it's quite reasonable to use\nthe same host also for other programs (including other PG instances), in which\ncase it's likely desirable to be adaptive to the current load when deciding\nwhat to cache - which the kernel is in the best position to do.\n\n\n\n> If the alignment trick from c.h appears to be available but is actually\n> broken (GCC 4.2.1), then those assertions I added into smgrread() et\n> al will fail as Tom showed (yay! they did their job), or in a\n> non-assert build you'll probably get EINVAL when you try to read or\n> write from your badly aligned buffers depending on how picky your OS\n> is, but that's just an old bug in a defunct compiler that we have by\n> now written more about they ever did in their bug tracker.\n\nAgreed. If we ever find such issues in a postmordial compiler, we'll just need\nto beef up our configure test to detect that it doesn't actually fully support\nspecifying alignment.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:10:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-18 15:35:09 -0400, Greg Stark wrote:\n> On Mon, 17 Apr 2023 at 17:45, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > It's something you'd\n> > opt into for a dedicated database server along with other carefully\n> > considered settings. It seems acceptable to me that if you set\n> > io_direct to a non-default setting on an unusual-for-a-database-server\n> > filesystem you might get errors screaming about inability to open\n> > files -- you'll just have to turn it back off again if it doesn't work\n> > for you.\n> \n> If the only solution is to turn it off perhaps the server should just\n> turn it off? I guess the problem is that the shared_buffers might be\n> set assuming it would be on?\n\nI am quite strongly opposed to that - silently (or with a log message, which\npractically is the same as silently) disabling performance relevant options\nlike DIO is much more likely to cause problems, due to the drastically\ndifferent performance characteristics you get. I can see us making it\nconfigurable to try using DIO though, but I am not convinced it's worth\nbothering with that. But we'll see.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:13:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 12:54 PM Andres Freund <andres@anarazel.de> wrote:\n> Interestingly, I haven't seen that as much in more recent benchmarks as it\n> used to. Partially I think because common s_b settings have gotten bigger, I\n> guess. But I wonder if we also accidentally improved something else, e.g. by\n> pin/unpin-ing the same buffer a bit less frequently.\n\nI think the problem with the algorithm is pretty fundamental. The rate\nof usage count increase is tied to how often we access buffers, and\nthe rate of usage count decrease is tied to buffer eviction. But a\ngiven workload can have no eviction at all (in which case the usage\ncounts must converge to 5) or it can evict on every buffer access (in\nwhich case the usage counts must mostly converget to 0, because we'll\nbe decreasing usage counts at least once per buffer and generally\nmore). ISTM that the only way that you can end up with a good mix of\nusage counts is if you have a workload that falls into some kind of a\nsweet spot where the rate of usage count bumps and the rate of usage\ncount de-bumps are close enough together that things don't skew all\nthe way to one end or the other. Bigger s_b might make that more\nlikely to happen in practice, but it seems like bad algorithm design\non a theoretical level. We should be comparing the frequency of access\nof buffers to the frequency of access of other buffers, not to the\nfrequency of buffer eviction. Or to put the same thing another way,\nthe average value of the usage count shouldn't suddenly change from 5\nto 1 when the server evicts 1 buffer.\n\n> I do think we likely should (as IIRC Peter Geoghegan suggested) provide more\n> information to the buffer replacement layer:\n>\n> Independent of the concrete buffer replacement algorithm, the recency\n> information we do provide is somewhat lacking. 
In some places we do repeated\n> ReadBuffer() calls for the same buffer, leading to over-inflating usagecount.\n\nYeah, that would be good to fix. I don't think it solves the whole\nproblem by itself, but it seems like a good step.\n\n> We should seriously consider using the cost of the IO into account, basically\n> making it more likely that s_b is increased when we need to synchronously wait\n> for IO. The cost of a miss is much lower for e.g. a sequential scan or a\n> bitmap heap scan, because both can do some form of prefetching. Whereas index\n> pages and the heap fetch for plain index scans aren't prefetchable (which\n> could be improved some, but not generally).\n\nI guess that's reasonable if we can pass the information around well\nenough, but I still think the algorithm ought to get some fundamental\nimprovement first.\n\n> FWIW, my experience is that linux' page replacement doesn't work very well\n> either. Partially because we \"hide\" a lot of the recency information from\n> it. But also just because it doesn't scale all that well to large amounts of\n> memory (there's ongoing work on that though). So I am not really convinced by\n> this argument - for plenty workloads just caching in PG will be far better\n> than caching both in the kernel and in PG, as long as some adaptiveness to\n> memory pressure avoids running into OOMs.\n\nEven if the Linux algorithm is bad, and it may well be, the Linux\ncache is often a lot bigger than our cache. Which can cover a\nmultitude of problems.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Apr 2023 13:16:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-19 13:16:54 -0400, Robert Haas wrote:\n> On Wed, Apr 19, 2023 at 12:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > Interestingly, I haven't seen that as much in more recent benchmarks as it\n> > used to. Partially I think because common s_b settings have gotten bigger, I\n> > guess. But I wonder if we also accidentally improved something else, e.g. by\n> > pin/unpin-ing the same buffer a bit less frequently.\n> \n> I think the problem with the algorithm is pretty fundamental. The rate\n> of usage count increase is tied to how often we access buffers, and\n> the rate of usage count decrease is tied to buffer eviction. But a\n> given workload can have no eviction at all (in which case the usage\n> counts must converge to 5) or it can evict on every buffer access (in\n> which case the usage counts must mostly converget to 0, because we'll\n> be decreasing usage counts at least once per buffer and generally\n> more).\n\nI don't think the \"evict on every buffer access\" works quite that way - unless\nyou have a completely even access pattern, buffer access frequency will\nincrease usage count much more frequently on some buffers than others. And if\nyou have a completely even access pattern, it's hard to come up with a good\ncache replacement algorithm...\n\n\n> ISTM that the only way that you can end up with a good mix of\n> usage counts is if you have a workload that falls into some kind of a\n> sweet spot where the rate of usage count bumps and the rate of usage\n> count de-bumps are close enough together that things don't skew all\n> the way to one end or the other. Bigger s_b might make that more\n> likely to happen in practice, but it seems like bad algorithm design\n> on a theoretical level. We should be comparing the frequency of access\n> of buffers to the frequency of access of other buffers, not to the\n> frequency of buffer eviction. 
Or to put the same thing another way,\n> the average value of the usage count shouldn't suddenly change from 5\n> to 1 when the server evicts 1 buffer.\n\nI agree that there are fundamental issues with the algorithm. But practically\nI think the effect of the over-saturation of s_b isn't as severe as one might\nthink:\n\nIf your miss rate is very low, the occasional bad victim buffer selection\nwon't matter that much. If the miss rate is a bit higher, the likelihood of\nthe usagecount being increased again after being decreased is higher if a\nbuffer is accessed frequently.\n\nThis is also why I think that larger s_b makes the issues less likely - with\nlarger s_b, it is more likely that frequently accessed buffers are accessed\nagain after the first of the 5 clock sweeps necessary to reduce the usage\ncount. Clearly, with a small-ish s_b and a high replacement rate, that's not\ngoing to happen for sufficiently many buffers. But once you have a few GB of\ns_b, multiple complete sweeps take a while.\n\n\nMost, if not all, buffer replacement algorithms I have seen, don't deal well\nwith \"small SB with a huge replacement rate\". Most of the fancier algorithms\ntrack recency information for buffers that have recently been evicted - but\nyou obviously can't track that to an unlimited degree, IIRC most papers\npropose that the shadow map to be roughly equal to the buffer pool size.\n\nYou IMO pretty much need a policy decision on a higher level to improve upon\nthat (e.g. by just deciding that some buffers are sticky, perhaps because they\nwere used first) - but it doesn't matter much, because the miss rate is high\nenough that the total amount of reads is barely affected by the buffer\nreplacement decisions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:43:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 12:06:23PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sat, Apr 15, 2023 at 2:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I get the impression that we are going to need an actual runtime\n> >> test if we want to defend against this. Not entirely convinced\n> >> it's worth the trouble. Who, other than our deliberately rear-guard\n> >> buildfarm animals, is going to be building modern PG with such old\n> >> compilers? (And more especially to the point, on platforms new\n> >> enough to have working O_DIRECT?)\n> \n> > I don't think that I fully understand everything under discussion\n> > here, but I would just like to throw in a vote for trying to make\n> > failures as comprehensible as we reasonably can.\n> \n> I'm not hugely concerned about this yet. I think the reason for\n> slipping this into v16 as developer-only code is exactly that we need\n> to get a feeling for where the portability dragons live.\n\nSpeaking of the developer-only status, I find the io_direct name more enticing\nthan force_parallel_mode, which PostgreSQL renamed due to overuse from people\nexpecting non-developer benefits. Should this have a name starting with\ndebug_?\n\n\n",
"msg_date": "Sat, 29 Apr 2023 21:11:06 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 30, 2023 at 4:11 PM Noah Misch <noah@leadboat.com> wrote:\n> Speaking of the developer-only status, I find the io_direct name more enticing\n> than force_parallel_mode, which PostgreSQL renamed due to overuse from people\n> expecting non-developer benefits. Should this have a name starting with\n> debug_?\n\nHmm, yeah I think people coming from other databases would be tempted\nby it. But, unlike the\nplease-jam-a-gather-node-on-top-of-the-plan-so-I-can-debug-the-parallel-executor\nswitch, I think of this thing more like an experimental feature that\nis just waiting for more features to make it useful. What about a\nwarning message about that at startup if it's on?\n\n\n",
"msg_date": "Sun, 30 Apr 2023 18:35:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 30, 2023 at 6:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Apr 30, 2023 at 4:11 PM Noah Misch <noah@leadboat.com> wrote:\n> > Speaking of the developer-only status, I find the io_direct name more enticing\n> > than force_parallel_mode, which PostgreSQL renamed due to overuse from people\n> > expecting non-developer benefits. Should this have a name starting with\n> > debug_?\n>\n> Hmm, yeah I think people coming from other databases would be tempted\n> by it. But, unlike the\n> please-jam-a-gather-node-on-top-of-the-plan-so-I-can-debug-the-parallel-executor\n> switch, I think of this thing more like an experimental feature that\n> is just waiting for more features to make it useful. What about a\n> warning message about that at startup if it's on?\n\nSomething like this? Better words welcome.\n\n$ ~/install//bin/postgres -D pgdata -c io_direct=data\n2023-05-01 09:44:37.460 NZST [99675] LOG: starting PostgreSQL 16devel\non x86_64-unknown-freebsd13.2, compiled by FreeBSD clang version\n14.0.5 (https://github.com/llvm/llvm-project.git\nllvmorg-14.0.5-0-gc12386ae247c), 64-bit\n2023-05-01 09:44:37.460 NZST [99675] LOG: listening on IPv6 address\n\"::1\", port 5432\n2023-05-01 09:44:37.460 NZST [99675] LOG: listening on IPv4 address\n\"127.0.0.1\", port 5432\n2023-05-01 09:44:37.461 NZST [99675] LOG: listening on Unix socket\n\"/tmp/.s.PGSQL.5432\"\n2023-05-01 09:44:37.463 NZST [99675] WARNING: io_direct is an\nexperimental setting for developer testing only\n2023-05-01 09:44:37.463 NZST [99675] HINT: File I/O may be\ninefficient or not work on some file systems.\n2023-05-01 09:44:37.465 NZST [99678] LOG: database system was shut\ndown at 2023-05-01 09:43:51 NZST\n2023-05-01 09:44:37.468 NZST [99675] LOG: database system is ready to\naccept connections",
"msg_date": "Mon, 1 May 2023 10:11:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Sun, Apr 30, 2023 at 06:35:30PM +1200, Thomas Munro wrote:\n> On Sun, Apr 30, 2023 at 4:11 PM Noah Misch <noah@leadboat.com> wrote:\n> > Speaking of the developer-only status, I find the io_direct name more enticing\n> > than force_parallel_mode, which PostgreSQL renamed due to overuse from people\n> > expecting non-developer benefits. Should this have a name starting with\n> > debug_?\n> \n> Hmm, yeah I think people coming from other databases would be tempted\n> by it. But, unlike the\n> please-jam-a-gather-node-on-top-of-the-plan-so-I-can-debug-the-parallel-executor\n> switch, I think of this thing more like an experimental feature that\n> is just waiting for more features to make it useful. What about a\n> warning message about that at startup if it's on?\n\nSuch a warning wouldn't be particularly likely to be seen by someone who\nalready didn't read/understand the docs for the not-feature that they\nturned on.\n\nSince this is -currently- a developer-only feature, it seems reasonable\nto rename the GUC to debug_direct_io, and (at such time as it's\nconsidered to be helpful to users) later rename it to direct_io. \nThat avoids the issue that random advice to enable direct_io=x under\nv17+ is applied by people running v16. +0.8 to do so.\n\nMaybe in the future, it should be added to GUC_EXPLAIN, too ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 30 Apr 2023 18:50:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Apr 30, 2023 at 06:35:30PM +1200, Thomas Munro wrote:\n>> What about a\n>> warning message about that at startup if it's on?\n\n> Such a warning wouldn't be particularly likely to be seen by someone who\n> already didn't read/understand the docs for the not-feature that they\n> turned on.\n\nYeah, I doubt that that would be helpful at all.\n\n> Since this is -currently- a developer-only feature, it seems reasonable\n> to rename the GUC to debug_direct_io, and (at such time as it's\n> considered to be helpful to users) later rename it to direct_io.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 30 Apr 2023 20:00:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, May 1, 2023 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Apr 30, 2023 at 06:35:30PM +1200, Thomas Munro wrote:\n> >> What about a\n> >> warning message about that at startup if it's on?\n>\n> > Such a warning wouldn't be particularly likely to be seen by someone who\n> > already didn't read/understand the docs for the not-feature that they\n> > turned on.\n>\n> Yeah, I doubt that that would be helpful at all.\n\nFair enough.\n\n> > Since this is -currently- a developer-only feature, it seems reasonable\n> > to rename the GUC to debug_direct_io, and (at such time as it's\n> > considered to be helpful to users) later rename it to direct_io.\n>\n> +1\n\nYeah, the future cross-version confusion thing is compelling. OK,\nhere's a rename patch. I left all the variable names and related\nsymbols as they were, so it's just the GUC that gains the prefix. I\nmoved the documentation hunk up to be sorted alphabetically like\nnearby entries, because that seemed to look nicer, even though the\nlist isn't globally sorted.",
"msg_date": "Mon, 1 May 2023 14:47:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 7:35 AM Greg Stark <stark@mit.edu> wrote:\n> On Mon, 17 Apr 2023 at 17:45, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > (2) without a page cache, you really need to size your shared_buffers\n> > adequately and we can't do that automatically.\n>\n> Well.... I'm more optimistic... That may not always be impossible.\n> We've already added the ability to add more shared memory after\n> startup. We could implement the ability to add or remove shared buffer\n> segments after startup.\n\nYeah, there are examples of systems going back decades with multiple\nbuffer pools. In some you can add more space later, and in some you\ncan also configure pools with different block sizes (imagine if you\ncould set your extremely OLTP tables to use 4KB blocks for reduced\nwrite amplification and then perhaps even also promise that your\nstorage doesn't need FPIs for that size because you know it's\nperfectly safe™, and imagine if you could set some big write-only\nhistory tables to use 32KB blocks because some compression scheme\nworks better, etc), and you might also want different cache\nreplacement algorithms in different pools. Complex and advanced stuff\nno doubt and I'm not suggesting that's anywhere near a reasonable\nthing for us to think about now (as a matter of fact in another thread\nyou can find me arguing for fully unifying our existing segregated\nSLRU buffer pools with the one true buffer pool), but since we're\ntalking pie-in-the-sky ideas around the water cooler...\n\n\n",
"msg_date": "Thu, 4 May 2023 12:10:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, May 01, 2023 at 02:47:57PM +1200, Thomas Munro wrote:\n> On Mon, May 1, 2023 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > Since this is -currently- a developer-only feature, it seems reasonable\n> > > to rename the GUC to debug_direct_io, and (at such time as it's\n> > > considered to be helpful to users) later rename it to direct_io.\n> >\n> > +1\n> \n> Yeah, the future cross-version confusion thing is compelling. OK,\n> here's a rename patch.\n\nThis looks reasonable.\n\n\n",
"msg_date": "Sun, 14 May 2023 14:09:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Mon, May 15, 2023 at 9:09 AM Noah Misch <noah@leadboat.com> wrote:\n> This looks reasonable.\n\nPushed with a small change: a couple of GUC_check_errdetail strings\nneeded s/io_direct/debug_io_direct/. Thanks.\n\n\n",
"msg_date": "Mon, 15 May 2023 11:25:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On 01.05.23 04:47, Thomas Munro wrote:\n> On Mon, May 1, 2023 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> On Sun, Apr 30, 2023 at 06:35:30PM +1200, Thomas Munro wrote:\n>>>> What about a\n>>>> warning message about that at startup if it's on?\n>>\n>>> Such a warning wouldn't be particularly likely to be seen by someone who\n>>> already didn't read/understand the docs for the not-feature that they\n>>> turned on.\n>>\n>> Yeah, I doubt that that would be helpful at all.\n> \n> Fair enough.\n> \n>>> Since this is -currently- a developer-only feature, it seems reasonable\n>>> to rename the GUC to debug_direct_io, and (at such time as it's\n>>> considered to be helpful to users) later rename it to direct_io.\n>>\n>> +1\n> \n> Yeah, the future cross-version confusion thing is compelling. OK,\n> here's a rename patch. I left all the variable names and related\n> symbols as they were, so it's just the GUC that gains the prefix. I\n> moved the documentation hunk up to be sorted alphabetically like\n> nearby entries, because that seemed to look nicer, even though the\n> list isn't globally sorted.\n\nI suggest to also rename the hook functions (check and assign), like in \nthe attached patch. Mainly because utils/guc_hooks.h says to order the \nfunctions by GUC variable name, which was already wrong under the old \nname, but it would be pretty confusing to sort the functions by their \nGUC name that doesn't match the function names.",
"msg_date": "Tue, 22 Aug 2023 14:15:34 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 10:43 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't think the \"evict on every buffer access\" works quite that way - unless\n> you have a completely even access pattern, buffer access frequency will\n> increase usage count much more frequently on some buffers than others. And if\n> you have a completely even access pattern, it's hard to come up with a good\n> cache replacement algorithm...\n\nMy guess is that the most immediate problem in this area is the\nproblem of \"correlated references\" (to use the term from the LRU-K\npaper). I gave an example of that here:\n\nhttps://postgr.es/m/CAH2-Wzk7T9K3d9_NY+jEXp2qQGMYoP=gZMoR8q1Cv57SxAw1OA@mail.gmail.com\n\nIn other words, I think that the most immediate problem may in fact be\nthe tendency of usage_count to get incremented multiple times in\nresponse to what is (for all intents and purposes) the same logical\npage access. Even if it's not as important as I imagine, it still\nseems likely that verifying that our input information isn't garbage\nis the logical place to begin work in this general area. It's\ndifficult to be sure about that because it's so hard to look at just\none problem in isolation. I suspect that you were right to point out\nthat a larger shared buffers tends to look quite different to a\nsmaller shared buffers. That same factor is going to complicate any\nanalysis of the specific problem that I've highlighted (to say nothing\nof the way that contention complicates the picture).\n\nThere is an interesting paper that compared the hit rates seen for\nTPC-C to TPC-E on relatively modern hardware:\n\nhttps://www.cs.cmu.edu/~chensm/papers/TPCE-sigmod-record10.pdf\n\nIt concludes that the buffer misses for each workload look rather\nsimilar, past a certain point (past a certain buffer pool size): both\nworkloads have cache misses that seem totally random. 
The access\npatterns may be very different, but that doesn't necessarily have any\nvisible effect on buffer misses. At least provided that you make\ncertain modest assumptions about buffer pool size, relative to working\nset size.\n\nThe most sophisticated cache management algorithms (like ARC) work by\nmaintaining metadata about recently evicted buffers, which is used to\ndecide whether to favor recency over frequency. If you work backwards\nthen it follows that having cache misses that look completely random\nis desirable, and perhaps even something to work towards. What you\nreally don't want is a situation where the same small minority of\npages keep getting ping-ponged into and out of the buffer pool,\nwithout ever settling, even though the buffer cache is large enough\nthat that's possible in principle. That pathological profile is the\nfurthest possible thing from random.\n\nWith smaller shared_buffers, it's perhaps inevitable that buffer cache\nmisses are random, and so I'd expect that managing the problem of\ncontention will tend to matter most. With larger shared_buffers it\nisn't inevitable at all, so the quality of the cache eviction scheme\nis likely to matter quite a bit more.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 22 Aug 2023 13:48:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 12:15 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I suggest to also rename the hook functions (check and assign), like in\n> the attached patch. Mainly because utils/guc_hooks.h says to order the\n> functions by GUC variable name, which was already wrong under the old\n> name, but it would be pretty confusing to sort the functions by their\n> GUC name that doesn't match the function names.\n\nOK. I'll push this tomorrow unless you do it while I'm asleep. Thanks!\n\n\n",
"msg_date": "Wed, 23 Aug 2023 16:57:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O"
}
] |
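The usage-count mechanics discussed in the thread above (counts bumped on each buffer access, decremented as the clock hand sweeps past, eviction when a count reaches zero) can be sketched as follows. This is a simplified, self-contained illustration of a clock-sweep policy with invented names — not PostgreSQL's actual buffer-manager code, though PostgreSQL's freelist does cap usage counts at 5:

```c
#include <assert.h>

#define NBUFFERS  3
#define MAX_USAGE 5             /* cap on the per-buffer usage count */

static int usage[NBUFFERS];     /* usage count per buffer, all start at 0 */
static int hand;                /* current position of the clock hand */

/* bump the buffer's usage count on access, up to the cap */
static void
access_buffer(int buf)
{
    if (usage[buf] < MAX_USAGE)
        usage[buf]++;
}

/* sweep the clock hand, decrementing counts until one reaches zero */
static int
evict_buffer(void)
{
    for (;;)
    {
        int buf = hand;

        hand = (hand + 1) % NBUFFERS;
        if (usage[buf] == 0)
            return buf;         /* found a victim */
        usage[buf]--;
    }
}
```

Two accesses to buffer 0 and one to buffer 1 leave buffer 2 at usage 0, so the sweep evicts it first. Note how repeated calls to access_buffer() on the same buffer keep inflating its count even when they belong to one logical page access — the "correlated references" problem raised above.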
[
{
"msg_contents": "Part of the work that Thomas mentions in [1], regarding Direct I/O,\nhas certain requirements that pointers must be page-aligned.\n\nI've attached a patch which implements palloc_aligned() and\nMemoryContextAllocAligned() which accept an 'alignto' parameter which\nmust be a power-of-2 value. The memory addresses returned by these\nfunctions will be aligned by the requested alignment.\n\nPrimarily, this work is by Andres. I took what he had and cleaned it\nup, fixed a few minor bugs then implemented repalloc() and\nGetMemoryChunkSpace() functionality.\n\nThe way this works is that palloc_aligned() can be called for any of\nthe existing MemoryContext types. What we do is perform a normal\nallocation request, but we add additional bytes to the size request to\nallow the proper alignment of the pointer that we return. Since we\nhave no control over the alignment of the return value from the\nallocation requests, we must adjust the pointer returned by the\nallocation function to align it to the required alignment.\n\nWhen an operation such as pfree() or repalloc() is performed on a\npointer retuned by palloc_aligned(), we can't go trying to pfree the\naligned pointer as this is not the pointer that was returned by the\nallocation function. To make all this work, another MemoryChunk\nstruct exists directly before the aligned pointer which has the\nMemoryContextMethodID set to MCTX_ALIGNED_REDIRECT_ID. These\n\"redirection\" MemoryChunks have the \"block offset\" set to allow the\nactual MemoryChunk of the original allocation to be found. We just\nsubtract the number of bytes stored in the block offset, which is just\nthe same as how we now find the owning AllocBlock from the MemoryChunk\nin aset.c. 
Once we do that offset calculation, we can just pfree()\nthe original chunk.\n\nThe 'alignto' is stored in this \"redirection\" MemoryChunk so that\nrepalloc() knows what the original alignment request was so that it\nthe repalloc'd chunk can be aligned by that amount too.\n\nIn the attached patch, there are not yet any users of these new 2\nfunctions. As mentioned above, Thomas is proposing some patches to\nimplement Direct I/O in [1] which will use these functions. Because I\ntouched memory contexts last, it likely makes the most sense for me to\nwork on this portion of the patch.\n\nComments welcome.\n\nPatch attached.\n\n(I did rip out all the I/O specific portions from Andres' patch which\nThomas proposes as his 0002 patch. Thomas will need to rebase\n(sorry)).\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CA+hUKGK1X532hYqJ_MzFWt0n1zt8trz980D79WbjwnT-yYLZpg@mail.gmail.com",
"msg_date": "Wed, 2 Nov 2022 00:28:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add palloc_aligned() to allow arbitrary power of 2 memory alignment"
},
{
"msg_contents": "Hi!\n\nPart of the work that Thomas mentions in [1], regarding Direct I/O,\n> has certain requirements that pointers must be page-aligned.\n>\n> I've attached a patch which implements palloc_aligned() and\n> MemoryContextAllocAligned() ...\n>\nI've done a quick look and the patch is looks good to me.\nLet's add tests for these functions, should we? If you think this is an\noverkill, feel free to trim tests for your taste.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 3 Nov 2022 18:20:47 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 00:28:46 +1300, David Rowley wrote:\n> Part of the work that Thomas mentions in [1], regarding Direct I/O,\n> diff --git a/src/backend/utils/mmgr/alignedalloc.c b/src/backend/utils/mmgr/alignedalloc.c\n> new file mode 100644\n> index 0000000000..e581772758\n> --- /dev/null\n> +++ b/src/backend/utils/mmgr/alignedalloc.c\n> @@ -0,0 +1,93 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * alignedalloc.c\n> + *\t Allocator functions to implement palloc_aligned\n\nFWIW, to me this is really more part of mcxt.c than its own\nallocator... Particularly because MemoryContextAllocAligned() et al are\nimplemented there.\n\n\n> +void *\n> +AlignedAllocRealloc(void *pointer, Size size)\n\nI doubtthere's ever a need to realloc such a pointer? Perhaps we could just\nelog(ERROR)?\n\n\n> +/*\n> + * MemoryContextAllocAligned\n> + *\t\tAllocate 'size' bytes of memory in 'context' aligned to 'alignto'\n> + *\t\tbytes.\n> + *\n> + * 'flags' may be 0 or set the same as MemoryContextAllocExtended().\n> + * 'alignto' must be a power of 2.\n> + */\n> +void *\n> +MemoryContextAllocAligned(MemoryContext context,\n> +\t\t\t\t\t\t Size size, Size alignto, int flags)\n> +{\n> +\tSize\t\talloc_size;\n> +\tvoid\t *unaligned;\n> +\tvoid\t *aligned;\n> +\n> +\t/* wouldn't make much sense to waste that much space */\n> +\tAssert(alignto < (128 * 1024 * 1024));\n> +\n> +\t/* ensure alignto is a power of 2 */\n> +\tAssert((alignto & (alignto - 1)) == 0);\n\nHm, not that I can see a case for ever not using a power of two\nalignment... There's not really a \"need\" for the restriction, right? 
Perhaps\nwe should note that?\n\n\n> +\t/*\n> +\t * We implement aligned pointers by simply allocating enough memory for\n> +\t * the requested size plus the alignment and an additional MemoryChunk.\n> +\t * This additional MemoryChunk is required for operations such as pfree\n> +\t * when used on the pointer returned by this function. We use this\n> +\t * \"redirection\" MemoryChunk in order to find the pointer to the memory\n> +\t * that was returned by the MemoryContextAllocExtended call below. We do\n> +\t * that by \"borrowing\" the block offset field and instead of using that to\n> +\t * find the offset into the owning block, we use it to find the original\n> +\t * allocated address.\n> +\t *\n> +\t * Here we must allocate enough extra memory so that we can still align\n> +\t * the pointer returned by MemoryContextAllocExtended and also have enough\n> +\t * space for the redirection MemoryChunk.\n> +\t */\n> +\talloc_size = size + alignto + sizeof(MemoryChunk);\n> +\n> +\t/* perform the actual allocation */\n> +\tunaligned = MemoryContextAllocExtended(context, alloc_size, flags);\n\nShould we handle the case where we get a suitably aligned pointer from\nMemoryContextAllocExtended() differently?\n\n\n> +\t/* XXX: should we adjust valgrind state here? */\n\nProbably still need to do this... Kinda hard to get right without the code\ngetting exercised. Wonder if there's some minimal case we could actually use\nit for?\n\nThanks,\n\nAndres\n\n\n",
"msg_date": "Mon, 7 Nov 2022 08:24:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "Thanks for having a look.\n\nOn Tue, 8 Nov 2022 at 05:24, Andres Freund <andres@anarazel.de> wrote:\n> FWIW, to me this is really more part of mcxt.c than its own\n> allocator... Particularly because MemoryContextAllocAligned() et al are\n> implemented there.\n\nI'm on the fence about this one. I thought it was nice that we have a\nfile per consumed MemoryContextMethodID. The thing that caused me to\nadd alignedalloc.c was the various other comments in the declaration\nof mcxt_methods[] that mention where to find each of the methods being\nassigned in that array (e.g /* aset.c */). I can change it back if\nyou feel strongly. I just don't.\n\n> I doubtthere's ever a need to realloc such a pointer? Perhaps we could just\n> elog(ERROR)?\n\nAre you maybe locked into just thinking about your current planned use\ncase that we want to allocate BLCKSZ bytes in each case? It does not\nseem impossible to me that someone will want something more than an\n8-byte alignment and also might want to enlarge the allocation at some\npoint. I thought it might be more dangerous not to implement repalloc.\nIt might not be clear to someone using palloc_aligned() that there's\nno code path that can call repalloc on the returned pointer.\n\n> Hm, not that I can see a case for ever not using a power of two\n> alignment... There's not really a \"need\" for the restriction, right? Perhaps\n> we should note that?\n\nTYPEALIGN() will not work correctly unless the alignment is a power of\n2. We could modify it to, but that would require doing some modular\nmaths instead of bitmasking. That seems pretty horrible when the macro\nis given a value that's not constant at compile time as we'd end up\nwith a (slow) divide in the code path. I think the restriction is a\ngood idea. 
I imagine there will never be any need to align to anything\nthat's not a power of 2.\n\n> Should we handle the case where we get a suitably aligned pointer from\n> MemoryContextAllocExtended() differently?\n\nMaybe it would be worth the extra check. I'm trying to imagine future\nuse cases. Maybe if someone wanted to ensure that we're aligned to\nCPU cache line boundaries then the chances of the pointer already\nbeing aligned to 64 bytes is decent enough. The problem is that\nit's too late to save any memory, it just saves a bit of boxing and\nunboxing of the redirect headers.\n\n> > + /* XXX: should we adjust valgrind state here? */\n>\n> Probably still need to do this... Kinda hard to get right without the code\n> getting exercised.\n\nYeah, that comment kept catching my eye. I agree. That should be\nhandled correctly. I'll work on that.\n\n> Wonder if there's some minimal case we could actually use\n> it for?\n\nIs there anything we could align to CPU cacheline size that would\nspeed something up?\n\nDavid\n\n\n",
"msg_date": "Tue, 8 Nov 2022 14:57:35 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Fri, 4 Nov 2022 at 04:20, Maxim Orlov <orlovmg@gmail.com> wrote:\n> I've done a quick look and the patch is looks good to me.\n> Let's add tests for these functions, should we? If you think this is an overkill, feel free to trim tests for your taste.\n\nThanks for doing that. I'm keen to wait a bit and see if we can come\nup with a core user of this before adding a test module. However, if\nwe keep the repalloc() implementation, then I wonder if it might be\nworth having a test module for that. I see you're not testing it in\nthe one you've written. Andres has suggested I remove the repalloc\nstuff I added but see my reply to that. I think we should keep it\nbased on the fact that someone using palloc_aligned might have no idea\nif some other code path can call repalloc() on the returned pointer.\n\nDavid\n\n\n",
"msg_date": "Tue, 8 Nov 2022 15:01:18 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 8:57 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> Is there anything we could align to CPU cacheline size that would\n> speed something up?\n\nInitCatCache() already has this, which could benefit from simpler notation.\n\n/*\n * Allocate a new cache structure, aligning to a cacheline boundary\n *\n * Note: we rely on zeroing to initialize all the dlist headers correctly\n */\nsz = sizeof(CatCache) + PG_CACHE_LINE_SIZE;\ncp = (CatCache *) CACHELINEALIGN(palloc0(sz));\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Nov 8, 2022 at 8:57 AM David Rowley <dgrowleyml@gmail.com> wrote:> Is there anything we could align to CPU cacheline size that would> speed something up?InitCatCache() already has this, which could benefit from simpler notation./* * Allocate a new cache structure, aligning to a cacheline boundary * * Note: we rely on zeroing to initialize all the dlist headers correctly */sz = sizeof(CatCache) + PG_CACHE_LINE_SIZE;cp = (CatCache *) CACHELINEALIGN(palloc0(sz));--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Nov 2022 09:17:46 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-08 15:01:18 +1300, David Rowley wrote:\n> Andres has suggested I remove the repalloc stuff I added but see my reply to\n> that.\n\nI'm fine with keeping it, I just couldn't really think of cases that have\nstrict alignment requirements but also requires resizing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Nov 2022 18:25:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Tue, 8 Nov 2022 at 14:57, David Rowley <dgrowleyml@gmail.com> wrote:\n> On Tue, 8 Nov 2022 at 05:24, Andres Freund <andres@anarazel.de> wrote:\n> > Should we handle the case where we get a suitably aligned pointer from\n> > MemoryContextAllocExtended() differently?\n>\n> Maybe it would be worth the extra check. I'm trying to imagine future\n> use cases. Maybe if someone wanted to ensure that we're aligned to\n> CPU cache line boundaries then the chances of the pointer already\n> being aligned to 64 bytes is decent enough. The problem is it that\n> it's too late to save any memory, it just saves a bit of boxing and\n> unboxing of the redirect headers.\n\nThinking about that a bit more, if we keep the repalloc support then\nwe can't do this as if we happen to get the right alignment by chance\nduring the palloc_aligned, then if we don't have the redirection\nMemoryChunk, then we've no way to ensure we keep the alignment after a\nrepalloc.\n\nDavid\n\n\n",
"msg_date": "Tue, 8 Nov 2022 16:53:49 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Tue, 8 Nov 2022 at 15:17, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Tue, Nov 8, 2022 at 8:57 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Is there anything we could align to CPU cacheline size that would\n> > speed something up?\n>\n> InitCatCache() already has this, which could benefit from simpler notation.\n\nThanks. I wasn't aware. I'll convert that to use palloc_aligned in the patch.\n\nDavid\n\n\n",
"msg_date": "Tue, 8 Nov 2022 16:54:25 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 8:57 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 8 Nov 2022 at 05:24, Andres Freund <andres@anarazel.de> wrote:\n> > I doubtthere's ever a need to realloc such a pointer? Perhaps we could\njust\n> > elog(ERROR)?\n>\n> Are you maybe locked into just thinking about your current planned use\n> case that we want to allocate BLCKSZ bytes in each case? It does not\n> seem impossible to me that someone will want something more than an\n> 8-byte alignment and also might want to enlarge the allocation at some\n> point. I thought it might be more dangerous not to implement repalloc.\n> It might not be clear to someone using palloc_aligned() that there's\n> no code path that can call repalloc on the returned pointer.\n\nI can imagine a use case for arrays of cacheline-sized objects.\n\n> TYPEALIGN() will not work correctly unless the alignment is a power of\n> 2. We could modify it to, but that would require doing some modular\n> maths instead of bitmasking. That seems pretty horrible when the macro\n> is given a value that's not constant at compile time as we'd end up\n> with a (slow) divide in the code path. I think the restriction is a\n> good idea. I imagine there will never be any need to align to anything\n> that's not a power of 2.\n\n+1\n\n> > Should we handle the case where we get a suitably aligned pointer from\n> > MemoryContextAllocExtended() differently?\n>\n> Maybe it would be worth the extra check. I'm trying to imagine future\n> use cases. Maybe if someone wanted to ensure that we're aligned to\n> CPU cache line boundaries then the chances of the pointer already\n> being aligned to 64 bytes is decent enough. 
The problem is it that\n> it's too late to save any memory, it just saves a bit of boxing and\n> unboxing of the redirect headers.\n\nTo my mind the main point of detecting this case is to save memory, so if\nthat's not possible/convenient, special-casing doesn't seem worth it.\n\n- Assert((char *) chunk > (char *) block);\n+ Assert((char *) chunk >= (char *) block);\n\nIs this related or independent?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Nov 8, 2022 at 8:57 AM David Rowley <dgrowleyml@gmail.com> wrote:> On Tue, 8 Nov 2022 at 05:24, Andres Freund <andres@anarazel.de> wrote:> > I doubtthere's ever a need to realloc such a pointer? Perhaps we could just> > elog(ERROR)?>> Are you maybe locked into just thinking about your current planned use> case that we want to allocate BLCKSZ bytes in each case? It does not> seem impossible to me that someone will want something more than an> 8-byte alignment and also might want to enlarge the allocation at some> point. I thought it might be more dangerous not to implement repalloc.> It might not be clear to someone using palloc_aligned() that there's> no code path that can call repalloc on the returned pointer.I can imagine a use case for arrays of cacheline-sized objects.> TYPEALIGN() will not work correctly unless the alignment is a power of> 2. We could modify it to, but that would require doing some modular> maths instead of bitmasking. That seems pretty horrible when the macro> is given a value that's not constant at compile time as we'd end up> with a (slow) divide in the code path. I think the restriction is a> good idea. I imagine there will never be any need to align to anything> that's not a power of 2.+1> > Should we handle the case where we get a suitably aligned pointer from> > MemoryContextAllocExtended() differently?>> Maybe it would be worth the extra check. I'm trying to imagine future> use cases. 
Maybe if someone wanted to ensure that we're aligned to> CPU cache line boundaries then the chances of the pointer already> being aligned to 64 bytes is decent enough. The problem is it that> it's too late to save any memory, it just saves a bit of boxing and> unboxing of the redirect headers.To my mind the main point of detecting this case is to save memory, so if that's not possible/convenient, special-casing doesn't seem worth it. -\tAssert((char *) chunk > (char *) block);+\tAssert((char *) chunk >= (char *) block);Is this related or independent?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 14 Nov 2022 09:25:32 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-08 14:57:35 +1300, David Rowley wrote:\n> On Tue, 8 Nov 2022 at 05:24, Andres Freund <andres@anarazel.de> wrote:\n> > Should we handle the case where we get a suitably aligned pointer from\n> > MemoryContextAllocExtended() differently?\n> \n> Maybe it would be worth the extra check. I'm trying to imagine future\n> use cases. Maybe if someone wanted to ensure that we're aligned to\n> CPU cache line boundaries then the chances of the pointer already\n> being aligned to 64 bytes is decent enough. The problem is it that\n> it's too late to save any memory, it just saves a bit of boxing and\n> unboxing of the redirect headers.\n\nCouldn't we reduce the amount of over-allocation by a small amount by special\ncasing the already-aligned case? That's not going to be relevant for page size\naligne allocations, but for smaller alignment values it could matter.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:11:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Tue, 15 Nov 2022 at 11:11, Andres Freund <andres@anarazel.de> wrote:\n> Couldn't we reduce the amount of over-allocation by a small amount by special\n> casing the already-aligned case? That's not going to be relevant for page size\n> aligne allocations, but for smaller alignment values it could matter.\n\nI don't quite follow this. How can we know the allocation is already\naligned without performing the allocation? To perform the allocation\nwe must tell palloc what size to allocate. So, we've already wasted\nthe space by the time we can tell if the allocation is aligned to what\nwe need.\n\nAside from that, there's already a special case for alignto <=\nMAXIMUM_ALIGNOF. But we know no palloc will ever return anything\naligned less than that in all cases, which is why that can work.\n\nDavid\n\n\n",
"msg_date": "Tue, 15 Nov 2022 23:36:53 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Mon, 14 Nov 2022 at 15:25, John Naylor <john.naylor@enterprisedb.com> wrote:\n> - Assert((char *) chunk > (char *) block);\n> + Assert((char *) chunk >= (char *) block);\n>\n> Is this related or independent?\n\nIt's related. Because the code is doing:\n\nMemoryChunkSetHdrMask(alignedchunk, unaligned, alignto,\n MCTX_ALIGNED_REDIRECT_ID);\n\nHere the blockoffset gets set to the difference between alignedchunk\nand unaligned. Typically when we call MemoryChunkSetHdrMask, the\nblockoffset is always the difference between the block and\nMemoryChunk, which is never 0 due to the block header fields. Here it\ncan be the same pointer when the redirection MemoryChunk is stored on\nthe first byte of the palloc'd address. This can happen if the\naddress returned by palloc + sizeof(MemoryChunk) is aligned to what we\nneed already.\n\nDavid\n\n\n",
"msg_date": "Tue, 15 Nov 2022 23:46:17 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 23:36:53 +1300, David Rowley wrote:\n> On Tue, 15 Nov 2022 at 11:11, Andres Freund <andres@anarazel.de> wrote:\n> > Couldn't we reduce the amount of over-allocation by a small amount by special\n> > casing the already-aligned case? That's not going to be relevant for page size\n> > aligne allocations, but for smaller alignment values it could matter.\n> \n> I don't quite follow this. How can we know the allocation is already\n> aligned without performing the allocation? To perform the allocation\n> we must tell palloc what size to allocate. So, we've already wasted\n> the space by the time we can tell if the allocation is aligned to what\n> we need.\n\nWhat I mean is that we perhaps could over-allocate by a bit less than\n alignto + sizeof(MemoryChunk)\nIf the value returned by the underlying memory context is already aligned to\nthe correct value, we can just return it as-is.\n\nWe already rely on memory context returning MAXIMUM_ALIGNOF aligned\nallocations. Adding the special case, I think, means that the we could safely\nover-allocate by \"only\"\n alignto + sizeof(MemoryChunk) - MAXIMUM_ALIGNOF\n\nWhich would be a reasonable win for small allocations with a small >\nMAXIMUM_ALIGNOF alignment. But I don't think that'll be a very common case?\n\n\n> Aside from that, there's already a special case for alignto <=\n> MAXIMUM_ALIGNOF. But we know no palloc will ever return anything\n> aligned less than that in all cases, which is why that can work.\n\nYep.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 11:19:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "So I think it's kind of cute that you've implemented these as agnostic\nwrappers that work with any allocator ... but why?\n\nI would have expected the functionality to just be added directly to\nthe allocator to explicitly request whole aligned pages which IIRC\nit's already capable of doing but just doesn't have any way to\nexplicitly request.\n\nDirectIO doesn't really need a wide variety of allocation sizes or\nalignments, it's always going to be the physical block size which\napparently can be as low as 512 bytes but I'm guessing we're always\ngoing to be using 4kB alignment and multiples of 8kB allocations.\nWouldn't just having a pool of 8kB pages all aligned on 4kB or 8kB\nalignment be simpler and more efficient than working around misaligned\npointers and having all these branches and arithmetic happening?\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:58:10 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 16:58:10 -0500, Greg Stark wrote:\n> So I think it's kind of cute that you've implemented these as agnostic\n> wrappers that work with any allocator ... but why?\n> \n> I would have expected the functionality to just be added directly to\n> the allocator to explicitly request whole aligned pages which IIRC\n> it's already capable of doing but just doesn't have any way to\n> explicitly request.\n\nWe'd need to support it in multiple allocators, and they'd code quite similar\nto this because for allocations that go directly to malloc.\n\nIt's possible that we'd want to add optional support for aligned allocations\nto e.g. aset.c but not other allocators - this patch would allow to add\nsupport for that transparently.\n\n\n> DirectIO doesn't really need a wide variety of allocation sizes or\n> alignments, it's always going to be the physical block size which\n> apparently can be as low as 512 bytes but I'm guessing we're always\n> going to be using 4kB alignment and multiples of 8kB allocations.\n\nYep - I posted numbers in some other thread showing that using a larger\nalignment is a good idea.\n\n\n> Wouldn't just having a pool of 8kB pages all aligned on 4kB or 8kB\n> alignment be simpler and more efficient than working around misaligned\n> pointers and having all these branches and arithmetic happening?\n\nI'm quite certain it'd increase memory usage, rather than reduce it - there's\nnot actually a whole lot of places that need aligned pages outside of bufmgr,\nso the pool would just be unused most of the time. And you'd need special code\nto return those pages to the pool when the operation using the aligned buffer\nfails - whereas integrating with memory contexts already takes care of\nthat. Lastly, there's other places where we can benefit from aligned\nallocations far smaller than 4kB (most typically cacheline aligned, I'd\nguess).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 15:12:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 08:19, Andres Freund <andres@anarazel.de> wrote:\n> We already rely on memory context returning MAXIMUM_ALIGNOF aligned\n> allocations. Adding the special case, I think, means that the we could safely\n> over-allocate by \"only\"\n> alignto + sizeof(MemoryChunk) - MAXIMUM_ALIGNOF\n>\n> Which would be a reasonable win for small allocations with a small >\n> MAXIMUM_ALIGNOF alignment. But I don't think that'll be a very common case?\n\nSeems reasonable. Subtracting MAXIMUM_ALIGNOF doesn't add any\nadditional run-time cost since it will be constant folded with the\nsizeof(MemoryChunk).\n\nI've attached an updated patch. The 0002 is just intended to exercise\nthese allocations a little bit, it's not intended for commit. I was\nusing that to ensure valgrind does not complain about anything. It\nseems happy now.\n\nDavid",
"msg_date": "Wed, 16 Nov 2022 23:56:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 23:56, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached an updated patch. The 0002 is just intended to exercise\n> these allocations a little bit, it's not intended for commit. I was\n> using that to ensure valgrind does not complain about anything. It\n> seems happy now.\n\nAfter making some comment adjustments and having adjusted the\ncalculation of how to get the old chunk size when doing repalloc() on\nan aligned chunk, I've now pushed this.\n\nThank you for the reviews.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Dec 2022 13:34:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add palloc_aligned() to allow arbitrary power of 2 memory\n alignment"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a patch adding regression tests for \\g and \\o, and TAP tests\nfor \\g | program,\n\nIt's a follow up to the discussion at [1]. Since this discussion\nalready has a slot in the CF [2] with a committed patch, let's start a\nnew separate thread.\n\n[1]\nhttps://www.postgresql.org/message-id/4333844c-2244-4d6e-a49a-1d483fbe304f@manitou-mail.org\n\n[2] https://commitfest.postgresql.org/40/3923/\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Tue, 01 Nov 2022 12:42:47 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Tests for psql \\g and \\o"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 12:42:47PM +0100, Daniel Verite wrote:\n> It's a follow up to the discussion at [1]. Since this discussion\n> already has a slot in the CF [2] with a committed patch, let's start a\n> new separate thread.\n\n+psql_like($node, \"SELECT 'one' \\\\g | cat >$g_file\", qr//, \"one command \\\\g\");\n+my $c1 = slurp_file($g_file);\n+like($c1, qr/one/);\n\nWindows may not have an equivalent for \"cat\", no? Note that psql's\n001_basic.pl has no restriction in place for Windows. Perhaps you\ncould use the same trick as basebackup_to_shell, where GZIP is used to\nwrite some arbitrary data.. Anyway, this has some quoting issues\nespecially if the file's path has whitespaces? This is located in\nFile::Temp::tempdir, still it does not sound like a good thing to rely\non this assumption on portability grounds.\n--\nMichael",
"msg_date": "Thu, 10 Nov 2022 13:37:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tests for psql \\g and \\o"
},
{
"msg_contents": "Michael Paquier wrote:\n\n> +psql_like($node, \"SELECT 'one' \\\\g | cat >$g_file\", qr//, \"one command\n> \\\\g\");\n> +my $c1 = slurp_file($g_file);\n> +like($c1, qr/one/);\n> \n> Windows may not have an equivalent for \"cat\", no? Note that psql's\n> 001_basic.pl has no restriction in place for Windows. Perhaps you\n> could use the same trick as basebackup_to_shell, where GZIP is used to\n> write some arbitrary data.. Anyway, this has some quoting issues\n> especially if the file's path has whitespaces? This is located in\n> File::Temp::tempdir, still it does not sound like a good thing to rely\n> on this assumption on portability grounds.\n\nPFA a new patch addressing these issues.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Wed, 23 Nov 2022 21:18:57 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Tests for psql \\g and \\o"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 09:18:57PM +0100, Daniel Verite wrote:\n> PFA a new patch addressing these issues.\n\nThanks, the tests part of the main regression test suite look good to\nme, so I have applied them after fixing a few typos and tweaking the\nstyle of the test. Regarding the tests with pipes, I had cold feet\nwith the dependencies on cat for non-WIN32 or findstr for WIN32. cat\nis used in the kerberos and ldap tests, though I am wondering whether\nwe shouldn't take an approach similar to other tests where the command\nmay not exist, and where we should check if there is something in the\nenvironment..\n--\nMichael",
"msg_date": "Wed, 30 Nov 2022 14:50:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tests for psql \\g and \\o"
},
{
"msg_contents": "\tMichael Paquier wrote:\n\n> Thanks, the tests part of the main regression test suite look good to\n> me, so I have applied them after fixing a few typos and tweaking the\n> style of the test.\n\nThanks!\n\n> Regarding the tests with pipes, I had cold feet with the\n> dependencies on cat for non-WIN32 or findstr for WIN32.\n\nOK. If the issue is that these programs might be missing, I guess\nwe could check that beforehand with IPC::Run and skip the\ncorresponding psql tests if they're not available or not working\nas expected.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 30 Nov 2022 19:22:42 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Tests for psql \\g and \\o"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 02:50:16PM +0900, Michael Paquier wrote:\n> On Wed, Nov 23, 2022 at 09:18:57PM +0100, Daniel Verite wrote:\n> > PFA a new patch addressing these issues.\n> \n> Thanks, the tests part of the main regression test suite look good to\n> me, so I have applied them after fixing a few typos and tweaking the\n> style of the test. Regarding the tests with pipes, I had cold feet\n> with the dependencies on cat for non-WIN32 or findstr for WIN32.\n\nI think you could do that with a perl 0-liner.\n\n$ echo foo |perl -pe ''\nfoo\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 30 Nov 2022 12:33:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Tests for psql \\g and \\o"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 12:33:59PM -0600, Justin Pryzby wrote:\n> I think you could do that with a perl 0-liner.\n\nRight. And this could do something similar to\n025_stuck_on_old_timeline.pl in terms of finding the binary for perl.\n--\nMichael",
"msg_date": "Thu, 1 Dec 2022 08:46:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tests for psql \\g and \\o"
}
] |
[
{
"msg_contents": "Dear developer:\r\nThe patch submitted addresses #17663 in the pgsql-bugs@lists.postgresql.org list.\r\nProblem: Add the parameters --enable-debug and --enable-cassert when the database is compiled. Driven by jdbc, the stored procedure containing rollbck is called, and an assertion occurs.\r\nCause of the problem: Driven by jdbc, in the function BuildCachedPlan, the CachedPlan memory context is generated to save the execution plan (plan) of the input SQL. If the stored procedure contains rollback, call the function ReleaseCachedPlan to release the CachedPlan memory context. Therefore, before the function pgss_store collects statistical information, it is necessary to retain the stmt_location and stmt_len data required in pstmt, which will not be released by the cCachedPlan memory context, resulting in random values for the parameters required by the function pgss_store.?",
"msg_date": "Tue, 1 Nov 2022 12:57:00 +0000",
"msg_from": "=?gb2312?B?1dTG5Lnw?= <zhaoqg45023@hundsun.com>",
"msg_from_op": true,
"msg_subject": "BUG #17663:Connect to the database through jdbc, call the stored\n procedure containing the rollback statement,the database triggers an\n assertion, and the database is in recovery mode."
},
{
"msg_contents": "=?gb2312?B?1dTG5Lnw?= <zhaoqg45023@hundsun.com> writes:\n> The patch submitted addresses #17663 in the pgsql-bugs@lists.postgresql.org list.\n> Problem: Add the parameters --enable-debug and --enable-cassert when the database is compiled. Driven by jdbc, the stored procedure containing rollbck is called, and an assertion occurs.\n> Cause of the problem: Driven by jdbc, in the function BuildCachedPlan, the CachedPlan memory context is generated to save the execution plan (plan) of the input SQL. If the stored procedure contains rollback, call the function ReleaseCachedPlan to release the CachedPlan memory context. Therefore, before the function pgss_store collects statistical information, it is necessary to retain the stmt_location and stmt_len data required in pstmt, which will not be released by the cCachedPlan memory context, resulting in random values for the parameters required by the function pgss_store.?\n\nIndeed ... thanks for the patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Nov 2022 12:01:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17663:Connect to the database through jdbc,\n call the stored procedure containing the rollback statement,the database\n triggers an assertion, and the database is in recovery mode."
}
] |
[
{
"msg_contents": "Hi,\n\nTom pinged me privately because mylodon, an animal enforcing C89/C99\ncompatibility, was failed. This is due to perl on the machine being upgraded\nto perl 5.36.\n\nMylodon was failing because of:\n\nconfigure:18839: ccache clang-13 -c -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers -Wno-array-bounds -std=c99 -Wc11-extensions -Werror=c11-extensions -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/lib/x86_64-linux-gnu/perl/5.36/CORE conftest.c >&5\nIn file included from conftest.c:170:\nIn file included from /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/perl.h:5777:\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/thread.h:386:8: error: '_Thread_local' is a C11 extension [-Werror,-Wc11-extensions]\nextern PERL_THREAD_LOCAL void *PL_current_context;\n ^\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/config.h:5154:27: note: expanded from macro 'PERL_THREAD_LOCAL'\n#define PERL_THREAD_LOCAL _Thread_local /**/\n ^\n1 error generated.\n\n\nI.e. 
perl's headers use C11 features, which unsurprisingly doesn't work when\nusing -Wc11-extensions -Werror=c11-extensions.\n\nFor now I worked around this by disabling perl for mylodon, but that's\nobviously not a great fix.\n\n\nperl 5.36 also causes a bunch of warnings locally, where I obviously don't\nuse -Wc11-extensions -Werror=c11-extensions:\n\n-Wdeclaration-after-statement produces a few copies of:\n[1767/2259 42 78%] Compiling C object src/pl/plperl/plperl.so.p/meson-generated_.._SPI.c.o\nIn file included from /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/perl.h:7242,\n from ../../../../home/andres/src/postgresql/src/pl/plperl/plperl.h:82,\n from ../../../../home/andres/src/postgresql/src/pl/plperl/SPI.xs:15:\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/inline.h: In function ‘Perl_cop_file_avn’:\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/inline.h:3489:5: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n 3489 | const char *file = CopFILE(cop);\n | ^~~~~\n\nAnd -Wshadow=compatible-local triggers the following, very verbose, warning:\n\n[1767/2259 42 78%] Compiling C object src/pl/plperl/plperl.so.p/meson-generated_.._SPI.c.o\n...\nIn file included from /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/perl.h:4155:\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h: In function ‘Perl_newSV_type’:\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:97:35: warning: declaration of ‘p_’ shadows a previous local [-Wshadow=compatible-local]\n 97 | # define MUTABLE_PTR(p) ({ void *p_ = (p); p_; })\n | ^~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv.h:1394:54: note: in definition of macro ‘SvSTASH_set’\n 1394 | (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END\n | ^~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:105:32: note: in expansion of macro ‘MUTABLE_PTR’\n 105 | #define MUTABLE_HV(p) ((HV *)MUTABLE_PTR(p))\n | ^~~~~~~~~~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h:487:29: note: in expansion of macro ‘MUTABLE_HV’\n 487 | 
SvSTASH_set(io, MUTABLE_HV(SvREFCNT_inc(GvHV(iogv))));\n | ^~~~~~~~~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:107:32: note: in expansion of macro ‘MUTABLE_PTR’\n 107 | #define MUTABLE_SV(p) ((SV *)MUTABLE_PTR(p))\n | ^~~~~~~~~~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv.h:346:59: note: in expansion of macro ‘MUTABLE_SV’\n 346 | #define SvREFCNT_inc(sv) Perl_SvREFCNT_inc(MUTABLE_SV(sv))\n | ^~~~~~~~~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h:487:40: note: in expansion of macro ‘SvREFCNT_inc’\n 487 | SvSTASH_set(io, MUTABLE_HV(SvREFCNT_inc(GvHV(iogv))));\n | ^~~~~~~~~~~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:97:35: note: shadowed declaration is here\n 97 | # define MUTABLE_PTR(p) ({ void *p_ = (p); p_; })\n | ^~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv.h:1394:54: note: in definition of macro ‘SvSTASH_set’\n 1394 | (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END\n | ^~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:105:32: note: in expansion of macro ‘MUTABLE_PTR’\n 105 | #define MUTABLE_HV(p) ((HV *)MUTABLE_PTR(p))\n | ^~~~~~~~~~~\n/usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h:487:29: note: in expansion of macro ‘MUTABLE_HV’\n 487 | SvSTASH_set(io, MUTABLE_HV(SvREFCNT_inc(GvHV(iogv))));\n | ^~~~~~~~~~\n\n\nI don't know how much longer we can rely on headers being\n-Wdeclaration-after-statement clean, my impression is that people don't have a\nlot of patience for C89isms anymore.\n\nI suspect the shadowing issue might get fixed if we report it, there've been a\nbunch of fixes around that not too long ago.\n\n\nI wonder if we should try to use -isystem for a bunch of external\ndependencies. That way we can keep the more aggressive warnings with a lower\nlikelihood of conflicting with stuff outside of our control.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 1 Nov 2022 11:01:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> Tom pinged me privately because mylodon, an animal enforcing C89/C99\n> compatibility, was failed. This is due to perl on the machine being upgraded\n> to perl 5.36.\n>\n> Mylodon was failing because of:\n>\n> configure:18839: ccache clang-13 -c -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers -Wno-array-bounds -std=c99 -Wc11-extensions -Werror=c11-extensions -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/lib/x86_64-linux-gnu/perl/5.36/CORE conftest.c >&5\n> In file included from conftest.c:170:\n> In file included from /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/perl.h:5777:\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/thread.h:386:8: error: '_Thread_local' is a C11 extension [-Werror,-Wc11-extensions]\n> extern PERL_THREAD_LOCAL void *PL_current_context;\n> ^\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/config.h:5154:27: note: expanded from macro 'PERL_THREAD_LOCAL'\n> #define PERL_THREAD_LOCAL _Thread_local /**/\n> ^\n> 1 error generated.\n>\n>\n> I.e. 
perl's headers use C11 features, which unsurprisingly doesn't work when\n> using -Wc11-extensions -Werror=c11-extensions.\n\nLike Postgres, Perl only requires C99, so any newer features such as\n_Thread_local are conditional on compiler support (probed by Configure).\n\nWe're not actively testing the fallbacks at the moment, but I'll look at\nadding a CI job with the appropriate -Werror flags to make sure it\ndoesn't break in future.\n\n> For now I worked around this by disabling perl for mylodon, but that's\n> obviously not a great fix.\n\nAn option would be to build a custom perl with the same -Werror flags\nmylodon uses for Postgres (via the -Accflags option to Configure), and\nthen building Postgres against that.\n\n> perl 5.36 also causes a bunch of warnings locally, where I obviously don't\n> use -Wc11-extensions -Werror=c11-extensions:\n>\n> -Wdeclaration-after-statement produces a few copies of:\n> [1767/2259 42 78%] Compiling C object src/pl/plperl/plperl.so.p/meson-generated_.._SPI.c.o\n> In file included from /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/perl.h:7242,\n> from ../../../../home/andres/src/postgresql/src/pl/plperl/plperl.h:82,\n> from ../../../../home/andres/src/postgresql/src/pl/plperl/SPI.xs:15:\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/inline.h: In function ‘Perl_cop_file_avn’:\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/inline.h:3489:5: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n> 3489 | const char *file = CopFILE(cop);\n> | ^~~~~\n>\n> I don't know how much longer we can rely on headers being\n> -Wdeclaration-after-statement clean, my impression is that people don't have a\n> lot of patience for C89isms anymore.\n\nPerl's C99 policy (https://perldoc.perl.org/perlhacktips#C99) explicitly\npermits mixed declarations and code, so I don't think that's likely to\nchange.\n\n> And -Wshadow=compatible-local triggers the following, very verbose, warning:\n>\n> [1767/2259 42 78%] Compiling C object 
src/pl/plperl/plperl.so.p/meson-generated_.._SPI.c.o\n> ...\n> In file included from /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/perl.h:4155:\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h: In function ‘Perl_newSV_type’:\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:97:35: warning:\n> declaration of ‘p_’ shadows a previous local [-Wshadow=compatible-local]\n> 97 | # define MUTABLE_PTR(p) ({ void *p_ = (p); p_; })\n> | ^~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv.h:1394:54: note: in definition of macro ‘SvSTASH_set’\n> 1394 | (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END\n> | ^~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:105:32: note: in expansion of macro ‘MUTABLE_PTR’\n> 105 | #define MUTABLE_HV(p) ((HV *)MUTABLE_PTR(p))\n> | ^~~~~~~~~~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h:487:29: note: in expansion of macro ‘MUTABLE_HV’\n> 487 | SvSTASH_set(io, MUTABLE_HV(SvREFCNT_inc(GvHV(iogv))));\n> | ^~~~~~~~~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:107:32: note: in expansion of macro ‘MUTABLE_PTR’\n> 107 | #define MUTABLE_SV(p) ((SV *)MUTABLE_PTR(p))\n> | ^~~~~~~~~~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv.h:346:59: note: in expansion of macro ‘MUTABLE_SV’\n> 346 | #define SvREFCNT_inc(sv) Perl_SvREFCNT_inc(MUTABLE_SV(sv))\n> | ^~~~~~~~~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h:487:40: note: in expansion of macro ‘SvREFCNT_inc’\n> 487 | SvSTASH_set(io, MUTABLE_HV(SvREFCNT_inc(GvHV(iogv))));\n> | ^~~~~~~~~~~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:97:35: note: shadowed declaration is here\n> 97 | # define MUTABLE_PTR(p) ({ void *p_ = (p); p_; })\n> | ^~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv.h:1394:54: note: in definition of macro ‘SvSTASH_set’\n> 1394 | (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END\n> | ^~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/handy.h:105:32: note: in expansion of macro ‘MUTABLE_PTR’\n> 105 | #define MUTABLE_HV(p) ((HV 
*)MUTABLE_PTR(p))\n> | ^~~~~~~~~~~\n> /usr/lib/x86_64-linux-gnu/perl/5.36/CORE/sv_inline.h:487:29: note: in expansion of macro ‘MUTABLE_HV’\n> 487 | SvSTASH_set(io, MUTABLE_HV(SvREFCNT_inc(GvHV(iogv))));\n> | ^~~~~~~~~~\n>\n> I suspect the shadowing issue might get fixed if we report it, there've been a\n> bunch of fixes around that not too long ago.\n\nThis one might be a bit tricky to fix. The root cause is the MUTABLE_PTR\nmacro, which is meant to allow casting between different pointer types\nwithout accientally losing constness, which (when GCC brace groups are\nsupported) is defined as:\n\n# define MUTABLE_PTR(p) ({ void *p_ = (p); p_; })\n\nAnd then we have:\n\n#define MUTABLE_xV(p) ((xV *)MUTABLE_PTR(p))\n\netc. for all the different value types (AV, GV, HV, SV, etc.)\n\nIn the above case, the SvREFCNT_inc() inside the MUTABLE_HV() expands to\nsomething that contains a MUTABLE_SV() call, causing the inner `p_`\nvariable to shadow the outer one.\n\n> I wonder if we should try to use -isystem for a bunch of external\n> dependencies. That way we can keep the more aggressive warnings with a lower\n> likelihood of conflicting with stuff outside of our control.\n\nThat is worth considering, at least if the above can't easily be fixed,\nor if we run across more dependencies with similar problems.\n\n> Greetings,\n>\n> Andres Freund\n\n- ilmari\n\n\n",
"msg_date": "Tue, 01 Nov 2022 18:55:43 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "On 01.11.22 19:01, Andres Freund wrote:\n> I don't know how much longer we can rely on headers being\n> -Wdeclaration-after-statement clean, my impression is that people don't have a\n> lot of patience for C89isms anymore.\n\n> I wonder if we should try to use -isystem for a bunch of external\n> dependencies. That way we can keep the more aggressive warnings with a lower\n> likelihood of conflicting with stuff outside of our control.\n\nPython has the same issues. There are a few other Python-embedding \nprojects that use -Wdeclaration-after-statement and complain if the \nPython headers violate it. But it's getting tedious. -isystem would be \na better solution.\n\n\n\n",
"msg_date": "Tue, 1 Nov 2022 17:00:27 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-01 17:00:27 -0400, Peter Eisentraut wrote:\n> On 01.11.22 19:01, Andres Freund wrote:\n> > I don't know how much longer we can rely on headers being\n> > -Wdeclaration-after-statement clean, my impression is that people don't have a\n> > lot of patience for C89isms anymore.\n> \n> > I wonder if we should try to use -isystem for a bunch of external\n> > dependencies. That way we can keep the more aggressive warnings with a lower\n> > likelihood of conflicting with stuff outside of our control.\n> \n> Python has the same issues. There are a few other Python-embedding projects\n> that use -Wdeclaration-after-statement and complain if the Python headers\n> violate it. But it's getting tedious. -isystem would be a better solution.\n\nWhich dependencies should we convert to -isystem? And I assume we should do so\nwith meson and autoconf? It's easy with meson, it provides a function to\nchange a dependency to use -isystem without knowing how the compiler spells\nthat. I guess with autoconf we'd have to check if the compiler understands\n-isystem?\n\nThe other alternative would be to drop -Wdeclaration-after-statement :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Nov 2022 16:43:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-01 17:00:27 -0400, Peter Eisentraut wrote:\n>> Python has the same issues. There are a few other Python-embedding projects\n>> that use -Wdeclaration-after-statement and complain if the Python headers\n>> violate it. But it's getting tedious. -isystem would be a better solution.\n\n> Which dependencies should we convert to -isystem?\n\nColor me confused about what's being discussed here. I see nothing\nin the gcc manual suggesting that -isystem has any effect on warning\nlevels?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Nov 2022 19:57:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99,\n -Wdeclaration-after-statement -Wshadow=compatible-local"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 19:57:45 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-01 17:00:27 -0400, Peter Eisentraut wrote:\n> >> Python has the same issues. There are a few other Python-embedding projects\n> >> that use -Wdeclaration-after-statement and complain if the Python headers\n> >> violate it. But it's getting tedious. -isystem would be a better solution.\n> \n> > Which dependencies should we convert to -isystem?\n> \n> Color me confused about what's being discussed here. I see nothing\n> in the gcc manual suggesting that -isystem has any effect on warning\n> levels?\n\nIt's only indirectly explained :(\n\n The -isystem and -idirafter options also mark the directory as a system directory, so that it gets the same special treatment that is applied to\n the standard system directories.\n\nand then https://gcc.gnu.org/onlinedocs/cpp/System-Headers.html\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Nov 2022 17:03:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 17:03:34 -0700, Andres Freund wrote:\n> On 2022-11-02 19:57:45 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2022-11-01 17:00:27 -0400, Peter Eisentraut wrote:\n> > >> Python has the same issues. There are a few other Python-embedding projects\n> > >> that use -Wdeclaration-after-statement and complain if the Python headers\n> > >> violate it. But it's getting tedious. -isystem would be a better solution.\n> >\n> > > Which dependencies should we convert to -isystem?\n> >\n> > Color me confused about what's being discussed here. I see nothing\n> > in the gcc manual suggesting that -isystem has any effect on warning\n> > levels?\n>\n> It's only indirectly explained :(\n>\n> The -isystem and -idirafter options also mark the directory as a system directory, so that it gets the same special treatment that is applied to\n> the standard system directories.\n>\n> and then https://gcc.gnu.org/onlinedocs/cpp/System-Headers.html\n\nThe attached *prototype* patch is a slightly different spin on the idea of\nusing -isystem: It adds a\n #pragma GCC system_header\nto plperl.h if supported by the compiler. That also avoids warnings from\nwithin plperl and subsidiary headers.\n\nI don't really have an opinion about whether using the pragma or -isystem is\npreferrable. I chose the pragma because it makes it easier to grep for headers\nwhere we chose to do this.\n\n\nI added the pragma detection only to the meson build, but if others think this\nis a good way to go, I'll do the necessary autoconf wrangling as well.\n\n\nIn the compiler test, I chose to not check whether -Werror=unknown-pragmas is\nsupported - it appears to be an old gcc flag, and the worst outcome is that\nHAVE_PRAGMA_SYSTEM_HEADER isn't defined.\n\nWe could alternatively define HAVE_PRAGMA_SYSTEM_HEADER or such based on\n__GNUC__ being defined.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 28 Dec 2022 10:24:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The attached *prototype* patch is a slightly different spin on the idea of\n> using -isystem: It adds a\n> #pragma GCC system_header\n> to plperl.h if supported by the compiler. That also avoids warnings from\n> within plperl and subsidiary headers.\n\n> I don't really have an opinion about whether using the pragma or -isystem is\n> preferrable. I chose the pragma because it makes it easier to grep for headers\n> where we chose to do this.\n\nThis seems like a reasonable answer. It feels quite a bit less magic\nin the way that it suppresses warnings than -isystem, and also less\nlikely to have unexpected side-effects (I have a nasty feeling that\n-isystem is more magic on macOS than elsewhere). So far it seems\nlike only the Perl headers have much of an issue, though I can\nforesee Python coming along soon.\n\n> In the compiler test, I chose to not check whether -Werror=unknown-pragmas is\n> supported - it appears to be an old gcc flag, and the worst outcome is that\n> HAVE_PRAGMA_SYSTEM_HEADER isn't defined.\n> We could alternatively define HAVE_PRAGMA_SYSTEM_HEADER or such based on\n> __GNUC__ being defined.\n\nHmm ... I guess the buildfarm would tell us whether that detection works\ncorrectly on platforms where it matters. Let's keep it simple if we\ncan.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Dec 2022 13:43:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99,\n -Wdeclaration-after-statement -Wshadow=compatible-local"
},
{
"msg_contents": "On 2022-12-28 13:43:27 -0500, Tom Lane wrote:\n> > In the compiler test, I chose to not check whether -Werror=unknown-pragmas is\n> > supported - it appears to be an old gcc flag, and the worst outcome is that\n> > HAVE_PRAGMA_SYSTEM_HEADER isn't defined.\n> > We could alternatively define HAVE_PRAGMA_SYSTEM_HEADER or such based on\n> > __GNUC__ being defined.\n> \n> Hmm ... I guess the buildfarm would tell us whether that detection works\n> correctly on platforms where it matters. Let's keep it simple if we\n> can.\n\nQuick clarification question: Are you suggesting to use #ifdef __GNUC__, or\nthat it suffices to use -Werror=unknown-pragmas without a separate configure\ncheck?\n\n\n",
"msg_date": "Wed, 28 Dec 2022 16:02:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-28 13:43:27 -0500, Tom Lane wrote:\n>> Hmm ... I guess the buildfarm would tell us whether that detection works\n>> correctly on platforms where it matters. Let's keep it simple if we\n>> can.\n\n> Quick clarification question: Are you suggesting to use #ifdef __GNUC__, or\n> that it suffices to use -Werror=unknown-pragmas without a separate configure\n> check?\n\nI'd try -Werror=unknown-pragmas, and then go to the #ifdef if that\ndoesn't seem to work well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Dec 2022 19:05:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99,\n -Wdeclaration-after-statement -Wshadow=compatible-local"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 19:05:35 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-12-28 13:43:27 -0500, Tom Lane wrote:\n> >> Hmm ... I guess the buildfarm would tell us whether that detection works\n> >> correctly on platforms where it matters. Let's keep it simple if we\n> >> can.\n> \n> > Quick clarification question: Are you suggesting to use #ifdef __GNUC__, or\n> > that it suffices to use -Werror=unknown-pragmas without a separate configure\n> > check?\n> \n> I'd try -Werror=unknown-pragmas, and then go to the #ifdef if that\n> doesn't seem to work well.\n\nIt turns out to not work terribly well. gcc, quite reasonably, warns about the\npragma used in .c files, and there's no easy way that I found to have autoconf\nname its test .h. We could include a test header in the compile test, but that\nalso adds some complication. As gcc has supported the pragma since 2000, I\nthink a simple\n #ifdef __GNUC__\n #define HAVE_PRAGMA_SYSTEM_HEADER\t1\n #endif\nshould suffice.\n\n\nI started to wonder if what we should do instead is to do something like\n\n#ifdef HAVE_PRAGMA_GCC_DIAGNOSTIC\n#pragma GCC diagnostic push\n#pragma GCC diagnostic ignored \"-Wdeclaration-after-statement\"\n#pragma GCC diagnostic ignored \"-Wshadow=compatible-local\"\n#endif\n\n#include \"EXTERN.h\"\n#include \"perl.h\"\n\n#ifdef HAVE_PRAGMA_GCC_DIAGNOSTIC\n#pragma GCC diagnostic pop\n#endif\n\nbut that ends up quite complicated because gcc will warn about unknown\nwarnings being ignored:\n\n../../../../home/andres/src/postgresql/src/pl/plperl/plperl.h:87:32: warning: unknown option after ‘#pragma GCC diagnostic’ kind [-Wpragmas]\n 87 | #pragma GCC diagnostic ignored \"-Wfrakbar\"\n\nwhich would mean we'd need to define a pg_config.h symbol for each potentially\nignored warning, and to guard each '#pragma GCC diagnostic ignored \"...\"' with\nan #ifdef.\n\n\nThus I propose the attached.\n\n\nShould we backpatch this? 
Given the volume of warnings it's probably a good\nidea. But I'd let it step in HEAD for a few days of buildfarm coverage first.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 29 Dec 2022 10:42:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It turns out to not work terribly well. gcc, quite reasonably, warns about the\n> pragma used in .c files, and there's no easy way that I found to have autoconf\n> name its test .h. We could include a test header in the compile test, but that\n> also adds some complication. As gcc has supported the pragma since 2000, I\n> think a simple\n> #ifdef __GNUC__\n> #define HAVE_PRAGMA_SYSTEM_HEADER\t1\n> #endif\n> should suffice.\n\nWe might find that some GCC-impostor compilers have trouble with it,\nbut if so we can adjust the #ifdef here.\n\nGetting nitpicky, I suggest calling it \"HAVE_PRAGMA_GCC_SYSTEM_HEADER\"\nto align better with what you actually have to write. Also:\n\n+ * Newer versions the perl headers trigger a lot of warnings with our compiler\n\n\"Newer versions of ...\" please. Otherwise LGTM.\n\n> Should we backpatch this? Given the volume of warnings it's probably a good\n> idea. But I'd let it step in HEAD for a few days of buildfarm coverage first.\n\n+1 to both points.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 13:51:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99,\n -Wdeclaration-after-statement -Wshadow=compatible-local"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-29 13:51:37 -0500, Tom Lane wrote:\n> We might find that some GCC-impostor compilers have trouble with it,\n> but if so we can adjust the #ifdef here.\n\nYea. I suspect it's widely enough used that any compiler claiming to be gcc\ncompatible has it, but ...\n\n\n> Getting nitpicky, I suggest calling it \"HAVE_PRAGMA_GCC_SYSTEM_HEADER\"\n> to align better with what you actually have to write.\n\nMakes sense.\n\n\n> + * Newer versions the perl headers trigger a lot of warnings with our compiler\n> \n> \"Newer versions of ...\" please. Otherwise LGTM.\n\nOops.\n\n\n> > Should we backpatch this? Given the volume of warnings it's probably a good\n> > idea. But I'd let it step in HEAD for a few days of buildfarm coverage first.\n> \n> +1 to both points.\n\nPushed to HEAD.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Dec 2022 13:40:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-29 13:40:13 -0800, Andres Freund wrote:\n> > > Should we backpatch this? Given the volume of warnings it's probably a good\n> > > idea. But I'd let it step in HEAD for a few days of buildfarm coverage first.\n> > \n> > +1 to both points.\n> \n> Pushed to HEAD.\n\nI haven't seen any problems in HEAD, so I'm working on backpatching.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Jan 2023 15:46:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "On 2023-01-02 15:46:36 -0800, Andres Freund wrote:\n> On 2022-12-29 13:40:13 -0800, Andres Freund wrote:\n> > > > Should we backpatch this? Given the volume of warnings it's probably a good\n> > > > idea. But I'd let it step in HEAD for a few days of buildfarm coverage first.\n> > > \n> > > +1 to both points.\n> > \n> > Pushed to HEAD.\n> \n> I haven't seen any problems in HEAD, so I'm working on backpatching.\n\nAnd done.\n\n\n",
"msg_date": "Mon, 2 Jan 2023 17:21:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: perl 5.36, C99, -Wdeclaration-after-statement\n -Wshadow=compatible-local"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-02 15:46:36 -0800, Andres Freund wrote:\n>> I haven't seen any problems in HEAD, so I'm working on backpatching.\n\n> And done.\n\nIt occurs to me that we should now be able to drop configure's\nprobe for -Wno-compound-token-split-by-macro, since that was only\nneeded to suppress warnings in the Perl headers. Won't save much\nof course, but every test we can get rid of is worth doing IMO.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 10:48:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99,\n -Wdeclaration-after-statement -Wshadow=compatible-local"
},
{
"msg_contents": "I wrote:\n> It occurs to me that we should now be able to drop configure's\n> probe for -Wno-compound-token-split-by-macro, since that was only\n> needed to suppress warnings in the Perl headers.\n\n... or not. A bit of experimentation says that they still come out,\napparently because the warnings are triggered by *use* of relevant\nPerl macros not by their *definitions*. Oh well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 11:02:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: perl 5.36, C99,\n -Wdeclaration-after-statement -Wshadow=compatible-local"
}
] |
[
{
"msg_contents": "Hey,\n\nRecent threads have pointed out some long-standing doc language in initdb\nthat could be made more precise, especially in light of the relatively\nrecent addition of a glossary. Toward this end I'm attaching a patch that\ndefines three terms: \"bootstrap superuser\", \"database superuser\" and\n\"superuser\". I didn't add any extra-glossary links for the later two but\ndid for the limited-in-scope bootstrap superuser that is really only\ndefined in initdb (actually, I suspect the authorization docs could use a\nlink too but haven't gone looking for an appropriate place yet).\n\nIn passing I also changed a few places where the documentation says\n\"database\" when the thing being referred to is basically the file system\ndata directory, which is a cluster-scoped thing.\n\nI did some grep'ing, though another pass or two is probably worthwhile.\nFor now I submit a preliminary patch for consideration and buy-in before\ntrying to polish it up.\n\nDavid J.",
"msg_date": "Tue, 1 Nov 2022 15:47:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Glossary and initdb definition work for \"superuser\" and\n database/cluster"
},
{
"msg_contents": "On Tue, Nov 01, 2022 at 03:47:15PM -0700, David G. Johnston wrote:\n> Hey,\n> \n> Recent threads have pointed out some long-standing doc language in initdb\n> that could be made more precise, especially in light of the relatively\n> recent addition of a glossary. Toward this end I'm attaching a patch that\n> defines three terms: \"bootstrap superuser\", \"database superuser\" and\n> \"superuser\". I didn't add any extra-glossary links for the later two but\n> did for the limited-in-scope bootstrap superuser that is really only\n> defined in initdb (actually, I suspect the authorization docs could use a\n> link too but haven't gone looking for an appropriate place yet).\n> \n> In passing I also changed a few places where the documentation says\n> \"database\" when the thing being referred to is basically the file system\n> data directory, which is a cluster-scoped thing.\n> \n> I did some grep'ing, though another pass or two is probably worthwhile.\n> For now I submit a preliminary patch for consideration and buy-in before\n> trying to polish it up.\n\nI think this is wrong:\n\n| https://www.postgresql.org/docs/devel/app-initdb.html\n| -U username\n| --username=username\n| \n| Selects the user name of the database superuser. This defaults to\n| the name of the effective user running initdb [...]\n\nIt's true that the user who runs initdb is typically named \"postgres\",\nbut that's only by convention.\n\n>+ This user owns all system catalog tables in each database. It also is the role\n>+ from which all granted permission originate. 
Because of these things this\n>+ role may not be dropped.\n\nplural permissions\n\nthese comma\n\n>+ While the <glossterm linkend=\"glossary-bootstrap-superuser\">bootstrap superuser</glossterm> is\n>+ a database superuser it has special obligations and restrictions that plain database superusers do not.\n\ncomma it\n\n>+ <glossentry id=\"glossary-superuser\">\n>+ <glossterm>Superuser</glossterm>\n>+ <glossdef>\n>+ <para>\n>+ As used in this documentation it is a synonym for\n\ncomma it\n\n> Creating a database cluster consists of creating the directories in\n>- which the database data will live, generating the shared catalog\n>+ which the cluster data will live, generating the shared catalog\n\n+1\n\n> tables (tables that belong to the whole cluster rather than to any\n>- particular database), and creating the <literal>postgres</literal>,\n>- <literal>template1</literal>, and <literal>template0</literal> databases.\n>+ particular database), creating the <literal>postgres</literal>,\n>+ <literal>template1</literal>, and <literal>template0</literal> databases,\n>+ and creating the\n>+ <glossterm linkend=\"glossary-bootstrap-superuser\">boostrap superuser</glossterm>\n>+ (<literal>postgres</literal>, by default).\n\n\"postgres\" is wrong\n\n> For security reasons the new cluster created by <command>initdb</command>\n>- will only be accessible by the cluster owner by default. The\n>+ will only be accessible by the cluster user by default. The\n\nI prefer \"cluster owner\"\n\n> <command>initdb</command>, but you can avoid writing it by\n> setting the <envar>PGDATA</envar> environment variable, which\n> can be convenient since the database server\n>- (<command>postgres</command>) can find the database\n>+ (<command>postgres</command>) can find the data\n> directory later by the same variable.\n\n+1\n\n>- Makes <command>initdb</command> read the database superuser's password\n>+ Makes <command>initdb</command> read the bootstrap superuser's password\n> from a file. 
The first line of the file is taken as the password.\n\n+1\n\n>- Safely write all database files to disk and exit. This does not\n>+ Safely write all database cluster files to disk and exit. This does not\n\n+1\n\n> It may be useful to adjust this size to control the granularity of\n>- WAL log shipping or archiving. Also, in databases with a high volume\n>+ WAL log shipping or archiving. Also, in clusters with a high volume\n> of WAL, the sheer number of WAL files per directory can become a\n\n+1\n\n\n",
"msg_date": "Tue, 1 Nov 2022 19:20:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Glossary and initdb definition work for \"superuser\" and\n database/cluster"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 5:20 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Nov 01, 2022 at 03:47:15PM -0700, David G. Johnston wrote:\n>\n>\n> I think this is wrong:\n>\n> | https://www.postgresql.org/docs/devel/app-initdb.html\n> | -U username\n> | --username=username\n> |\n> | Selects the user name of the database superuser. This defaults to\n> | the name of the effective user running initdb [...]\n>\n> It's true that the user who runs initdb is typically named \"postgres\",\n> but that's only by convention.\n>\n\nThanks. I feel bad for missing this one given that I've been working on\nfixing up the default libpq user name wording.\n\n\n>\n> >+ This user owns all system catalog tables in each database. It also\n> is the role\n> >+ from which all granted permission originate. Because of these\n> things this\n> >+ role may not be dropped.\n>\n> plural permissions\n>\n+1\n\n\n>\n> these comma\n>\n\nthings comma actually (+0.5)\n\n\n> >+ While the <glossterm\n> linkend=\"glossary-bootstrap-superuser\">bootstrap superuser</glossterm> is\n> >+ a database superuser it has special obligations and restrictions\n> that plain database superusers do not.\n>\n> comma it\n>\n\n+ 0.5\n\n>\n> > tables (tables that belong to the whole cluster rather than to any\n> >- particular database), and creating the <literal>postgres</literal>,\n> >- <literal>template1</literal>, and <literal>template0</literal>\n> databases.\n> >+ particular database), creating the <literal>postgres</literal>,\n> >+ <literal>template1</literal>, and <literal>template0</literal>\n> databases,\n> >+ and creating the\n> >+ <glossterm linkend=\"glossary-bootstrap-superuser\">boostrap\n> superuser</glossterm>\n> >+ (<literal>postgres</literal>, by default).\n>\n> \"postgres\" is wrong\n>\n\nYep, will give this another look to see if anywhere but the actual option\ndescription wants to cover how this really works (or maybe just point the\nreader there).\n\n\n> > For security reasons 
the new cluster created by\n> <command>initdb</command>\n> >- will only be accessible by the cluster owner by default. The\n> >+ will only be accessible by the cluster user by default. The\n>\n> I prefer \"cluster owner\"\n>\n\nI'll either need to change it back or fix the one in the next sentence...\n\nI'm still leaning toward continuing to use cluster user like everywhere\nelse on the page instead of adding a new term. The fact that this doesn't\nwork on Windows makes having it in the description section at all\narguable. I'd rather rewrite it something like:\n\n\"On POSIX systems, the resulting data directory, and all of its contents,\nwill have permissions of 700, though you can use --allow-group-access to\ninstead get 750. In either case, the effective user running initdb will\nbecome the owner and group for the files created within the data directory.\"\n\n(I haven't tried to prove this owner:group dynamic, but having 700 or 750\nand specifying the alternative does result in the directory having its\npermission bits changed during initdb)\n\nFeel free to suggest something if similar wording should be added for\nnon-POSIX systems.\n\nI intend to try and integrate something like the above to replace the\nexisting paragraph in the next version.\n\nThank you for the review!\n\nDavid J.\n\nP.S. I'm now looking at the very first paragraph to initdb more closely,\nnot liking \"single server instance\" all that much and wondering how to fit\nin \"cluster user\" there - possibly by saying something like \"...managed by\na single server process, and physical data directory, whose effective user\nand owner respectively is called the cluster user. That user must exist\nand be used to execute this program.\"\n\nThen the whole \"initdb must be run as...\" paragraph can probably just go\naway. 
Moving the commentary about \"root\", again a non-Windows thing, to\nthe notes area.",
"msg_date": "Tue, 1 Nov 2022 18:59:30 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Glossary and initdb definition work for \"superuser\" and\n database/cluster"
},
{
"msg_contents": "On Tue, Nov 1, 2022 at 6:59 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n>\n> P.S. I'm now looking at the very first paragraph to initdb more closely,\n> not liking \"single server instance\" all that much and wondering how to fit\n> in \"cluster user\" there - possibly by saying something like \"...managed by\n> a single server process, and physical data directory, whose effective user\n> and owner respectively is called the cluster user. That user must exist\n> and be used to execute this program.\"\n>\n> Then the whole \"initdb must be run as...\" paragraph can probably just go\n> away. Moving the commentary about \"root\", again a non-Windows thing, to\n> the notes area.\n>\n>\nVersion 2 attached, some significant re-working. Starting to think that\ninitdb isn't the place for some of this content - in particular the stuff\nI'm deciding to move down to the Notes section. Might consider moving some\nof it to the Server Setup and Operation chapter 19 - Creating Cluster (or\nnearby...) [1].\n\nI settled on \"cluster owner\" over \"cluster user\" and made the terminology\nconsistent throughout initdb and the glossary (haven't looked at chapter 19\nyet). Also added it to the glossary.\n\nMoved quite a bit of material to notes from the description and options and\nexpanded upon what had already been said based upon various discussions\nI've been part of on the mailing lists.\n\nDecided to call out, in the glossary, the effective equivalence of database\nsuperuser and cluster owner. Which acts as an explanation as to why root\nis prohibited to be a cluster owner.\n\nDavid J.\n\n[1] https://www.postgresql.org/docs/current/creating-cluster.html",
"msg_date": "Wed, 2 Nov 2022 10:48:22 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Glossary and initdb definition work for \"superuser\" and\n database/cluster"
},
{
"msg_contents": "On 2022-Nov-02, David G. Johnston wrote:\n\n> Version 2 attached, some significant re-working. Starting to think that\n> initdb isn't the place for some of this content - in particular the stuff\n> I'm deciding to move down to the Notes section. Might consider moving some\n> of it to the Server Setup and Operation chapter 19 - Creating Cluster (or\n> nearby...) [1].\n> \n> I settled on \"cluster owner\" over \"cluster user\" and made the terminology\n> consistent throughout initdb and the glossary (haven't looked at chapter 19\n> yet). Also added it to the glossary.\n\nGenerally speaking, I like the idea of documenting these things.\nHowever it sounds like you're not done with the wording and editing, so\nI'm not committing the whole patch, but it seems a good starting point\nto at least have some basic definitions. So I've extracted them from\nyour patch and pushed those. You can already see it at\nhttps://www.postgresql.org/docs/devel/glossary.html\n\nI left out almost all the material from the patch that's not in the\nglossary proper, and also a few phrases in the glossary itself. Some of\nthese sounded like security considerations rather than part of the\ndefinitions. I think we should have a separate chapter in Part III\n(Server Administration) that explains many security aspects; right now\nthere's no hope of collecting a lot of very important advice in a single\nplace, so a wannabe admin has no chance of getting things right. That\nseems to me a serious deficiency. A new chapter could provide a lot of\ngeneral advice on every aspect that needs to be considered, and link to\nthe reference section for additional details. 
Maybe part of these\ninitdb considerations could be there, too.\n\n> Moved quite a bit of material to notes from the description and options and\n> expanded upon what had already been said based upon various discussions\n> I've been part of on the mailing lists.\n\nPlease rebase.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n",
"msg_date": "Fri, 18 Nov 2022 12:11:33 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Glossary and initdb definition work for \"superuser\" and\n database/cluster"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 4:11 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Nov-02, David G. Johnston wrote:\n>\n> > Version 2 attached, some significant re-working. Starting to think that\n> > initdb isn't the place for some of this content - in particular the stuff\n> > I'm deciding to move down to the Notes section. Might consider moving\n> some\n> > of it to the Server Setup and Operation chapter 19 - Creating Cluster (or\n> > nearby...) [1].\n> >\n> > I settled on \"cluster owner\" over \"cluster user\" and made the terminology\n> > consistent throughout initdb and the glossary (haven't looked at chapter\n> 19\n> > yet). Also added it to the glossary.\n>\n> Generally speaking, I like the idea of documenting these things.\n> However it sounds like you're not done with the wording and editing, so\n> I'm not committing the whole patch, but it seems a good starting point\n> to at least have some basic definitions. So I've extracted them from\n> your patch and pushed those. You can already see it at\n> https://www.postgresql.org/docs/devel/glossary.html\n\n\nAgreed on the not quite ready yet, and that the glossary is indeed\nself-contained enough to go in by itself at this point. Thank you for\ndoing that.\n\n\n> I left out almost all the material from the patch that's not in the\n> glossary proper, and also a few phrases in the glossary itself. Some of\n> these sounded like security considerations rather than part of the\n> definitions. I think we should have a separate chapter in Part III\n> (Server Administration) that explains many security aspects; right now\n> there's no hope of collecting a lot of very important advice in a single\n> place, so a wannabe admin has no chance of getting things right. That\n> seems to me a serious deficiency. A new chapter could provide a lot of\n> general advice on every aspect that needs to be considered, and link to\n> the reference section for additional details. 
Maybe part of these\n> initdb considerations could be there, too.\n>\n\nI'll consider that approach as well as other spots in the documentation on\nthis next pass.\n\nDavid J.",
"msg_date": "Fri, 18 Nov 2022 08:28:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Glossary and initdb definition work for \"superuser\" and\n database/cluster"
}
] |
[
{
"msg_contents": "There's a complaint at [1] about how you can't re-use the same\ncursor variable name within a routine called from another routine\nthat's already using that name. The complaint is itself a bit\nunder-documented, but I believe it is referring to this ancient\nbit of behavior:\n\n A bound cursor variable is initialized to the string value\n representing its name, so that the portal name is the same as\n the cursor variable name, unless the programmer overrides it\n by assignment before opening the cursor.\n\nSo if you try to nest usage of two bound cursor variables of the\nsame name, it blows up on the portal-name conflict. But it'll work\nfine if you use unbound cursors (i.e., plain \"refcursor\" variables):\n\n But an unbound cursor\n variable defaults to the null value initially, so it will receive\n an automatically-generated unique name, unless overridden.\n\nI wonder why we did it like that; maybe it's to be bug-compatible with\nsome Oracle PL/SQL behavior or other? Anyway, this seems non-orthogonal\nand contrary to all principles of structured programming. We don't even\noffer an example of the sort of usage that would benefit from it, ie\nthat calling code could \"just know\" what the portal name is.\n\nI propose that we should drop this auto initialization and let all\nrefcursor variables start out null, so that they'll get unique\nportal names unless you take explicit steps to do something else.\nAs attached.\n\n(Obviously this would be a HEAD-only fix, but maybe there's scope for\nimproving the back-branch docs along lines similar to these changes.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/166689990972.627.16269382598283029015%40wrigleys.postgresql.org",
"msg_date": "Tue, 01 Nov 2022 19:39:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "st 2. 11. 2022 v 0:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> There's a complaint at [1] about how you can't re-use the same\n> cursor variable name within a routine called from another routine\n> that's already using that name. The complaint is itself a bit\n> under-documented, but I believe it is referring to this ancient\n> bit of behavior:\n>\n> A bound cursor variable is initialized to the string value\n> representing its name, so that the portal name is the same as\n> the cursor variable name, unless the programmer overrides it\n> by assignment before opening the cursor.\n>\n> So if you try to nest usage of two bound cursor variables of the\n> same name, it blows up on the portal-name conflict. But it'll work\n> fine if you use unbound cursors (i.e., plain \"refcursor\" variables):\n>\n> But an unbound cursor\n> variable defaults to the null value initially, so it will receive\n> an automatically-generated unique name, unless overridden.\n>\n> I wonder why we did it like that; maybe it's to be bug-compatible with\n> some Oracle PL/SQL behavior or other? Anyway, this seems non-orthogonal\n> and contrary to all principles of structured programming. We don't even\n> offer an example of the sort of usage that would benefit from it, ie\n> that calling code could \"just know\" what the portal name is.\n>\n> I propose that we should drop this auto initialization and let all\n> refcursor variables start out null, so that they'll get unique\n> portal names unless you take explicit steps to do something else.\n> As attached.\n>\n\n+1\n\n\n> (Obviously this would be a HEAD-only fix, but maybe there's scope for\n> improving the back-branch docs along lines similar to these changes.)\n>\n\n+1\n\nI agree with this proposal. 
The current behavior breaks the nesting\nconcept.\n\nUnfortunately, it can break backward compatibility, but I think it is\npossible to detect phony usage of cursor variables in plpgsql_check.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/166689990972.627.16269382598283029015%40wrigleys.postgresql.org\n>\n>",
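A minimal sketch of the nesting failure being described (function names are invented for illustration): both routines bind a cursor named `c`, so under the historical behavior both OPENs try to create a portal named "c":

```sql
-- Hypothetical illustration: two routines each declare a bound cursor "c".
CREATE FUNCTION inner_fn() RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    c CURSOR FOR SELECT 2;
BEGIN
    OPEN c;   -- historically fails here: cursor "c" already in use
    CLOSE c;
END;
$$;

CREATE FUNCTION outer_fn() RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    c CURSOR FOR SELECT 1;
BEGIN
    OPEN c;                 -- old behavior: creates a portal named "c"
    PERFORM inner_fn();     -- blows up on the portal-name conflict
    CLOSE c;
END;
$$;
```

Under the proposed change both cursor variables start out null, each OPEN picks a generated portal name, and the nested call succeeds.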
"msg_date": "Wed, 2 Nov 2022 03:51:07 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "Hi\n\n\nst 2. 11. 2022 v 0:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> There's a complaint at [1] about how you can't re-use the same\n> cursor variable name within a routine called from another routine\n> that's already using that name. The complaint is itself a bit\n> under-documented, but I believe it is referring to this ancient\n> bit of behavior:\n>\n> A bound cursor variable is initialized to the string value\n> representing its name, so that the portal name is the same as\n> the cursor variable name, unless the programmer overrides it\n> by assignment before opening the cursor.\n>\n> So if you try to nest usage of two bound cursor variables of the\n> same name, it blows up on the portal-name conflict. But it'll work\n> fine if you use unbound cursors (i.e., plain \"refcursor\" variables):\n>\n> But an unbound cursor\n> variable defaults to the null value initially, so it will receive\n> an automatically-generated unique name, unless overridden.\n>\n> I wonder why we did it like that; maybe it's to be bug-compatible with\n> some Oracle PL/SQL behavior or other? Anyway, this seems non-orthogonal\n> and contrary to all principles of structured programming. We don't even\n> offer an example of the sort of usage that would benefit from it, ie\n> that calling code could \"just know\" what the portal name is.\n>\n> I propose that we should drop this auto initialization and let all\n> refcursor variables start out null, so that they'll get unique\n> portal names unless you take explicit steps to do something else.\n> As attached.\n>\n> (Obviously this would be a HEAD-only fix, but maybe there's scope for\n> improving the back-branch docs along lines similar to these changes.)\n>\n>\nI am sending an review of this patch\n\n1. The patching, compilation without any problems\n2. All tests passed\n3. The implemented change is documented well\n4. Although this is potencial compatibility break, we want this feature. 
It\nallows to use cursors variables in recursive calls by default, it allows\nshadowing of cursor variables\n5. This patch is short and almost trivial, just remove code.\n\nI'll mark this patch as ready for commit\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/166689990972.627.16269382598283029015%40wrigleys.postgresql.org\n>\n>\n\nHist 2. 11. 2022 v 0:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:There's a complaint at [1] about how you can't re-use the same\ncursor variable name within a routine called from another routine\nthat's already using that name. The complaint is itself a bit\nunder-documented, but I believe it is referring to this ancient\nbit of behavior:\n\n A bound cursor variable is initialized to the string value\n representing its name, so that the portal name is the same as\n the cursor variable name, unless the programmer overrides it\n by assignment before opening the cursor.\n\nSo if you try to nest usage of two bound cursor variables of the\nsame name, it blows up on the portal-name conflict. But it'll work\nfine if you use unbound cursors (i.e., plain \"refcursor\" variables):\n\n But an unbound cursor\n variable defaults to the null value initially, so it will receive\n an automatically-generated unique name, unless overridden.\n\nI wonder why we did it like that; maybe it's to be bug-compatible with\nsome Oracle PL/SQL behavior or other? Anyway, this seems non-orthogonal\nand contrary to all principles of structured programming. 
We don't even\noffer an example of the sort of usage that would benefit from it, ie\nthat calling code could \"just know\" what the portal name is.\n\nI propose that we should drop this auto initialization and let all\nrefcursor variables start out null, so that they'll get unique\nportal names unless you take explicit steps to do something else.\nAs attached.\n\n(Obviously this would be a HEAD-only fix, but maybe there's scope for\nimproving the back-branch docs along lines similar to these changes.)\nI am sending an review of this patch1. The patching, compilation without any problems2. All tests passed3. The implemented change is documented well4. Although this is potencial compatibility break, we want this feature. It allows to use cursors variables in recursive calls by default, it allows shadowing of cursor variables5. This patch is short and almost trivial, just remove code.I'll mark this patch as ready for commitRegardsPavel \n regards, tom lane\n\n[1] https://www.postgresql.org/message-id/166689990972.627.16269382598283029015%40wrigleys.postgresql.org",
"msg_date": "Fri, 4 Nov 2022 08:22:44 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "On 11/4/22 03:22, Pavel Stehule wrote:\n> Hi\n> \n> \n> st 2. 11. 2022 v 0:39 odesílatel Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> napsal:\n> \n> There's a complaint at [1] about how you can't re-use the same\n> cursor variable name within a routine called from another routine\n> that's already using that name. The complaint is itself a bit\n> under-documented, but I believe it is referring to this ancient\n> bit of behavior:\n> \n> A bound cursor variable is initialized to the string value\n> representing its name, so that the portal name is the same as\n> the cursor variable name, unless the programmer overrides it\n> by assignment before opening the cursor.\n> \n> So if you try to nest usage of two bound cursor variables of the\n> same name, it blows up on the portal-name conflict. But it'll work\n> fine if you use unbound cursors (i.e., plain \"refcursor\" variables):\n> \n> But an unbound cursor\n> variable defaults to the null value initially, so it will\n> receive\n> an automatically-generated unique name, unless overridden.\n> \n> I wonder why we did it like that; maybe it's to be bug-compatible with\n> some Oracle PL/SQL behavior or other? Anyway, this seems non-orthogonal\n> and contrary to all principles of structured programming. We don't even\n> offer an example of the sort of usage that would benefit from it, ie\n> that calling code could \"just know\" what the portal name is.\n> \n> I propose that we should drop this auto initialization and let all\n> refcursor variables start out null, so that they'll get unique\n> portal names unless you take explicit steps to do something else.\n> As attached.\n> \n> (Obviously this would be a HEAD-only fix, but maybe there's scope for\n> improving the back-branch docs along lines similar to these changes.)\n> \n> \n> I am sending an review of this patch\n> \n> 1. The patching, compilation without any problems\n> 2. All tests passed\n> 3. 
The implemented change is documented well\n> 4. Although this is potencial compatibility break, we want this feature. \n> It allows to use cursors variables in recursive calls by default, it \n> allows shadowing of cursor variables\n> 5. This patch is short and almost trivial, just remove code.\n> \n> I'll mark this patch as ready for commit\n\nI need to do some testing on this. I seem to recall that the naming was \noriginally done because a reference cursor is basically a named cursor \nthat can be handed around between functions and even the top SQL level \nof the application. For the latter to work the application needs to know \nthe name of the portal.\n\nI am currently down with Covid and have trouble focusing. But I hope to \nget to it some time next week.\n\n\nRegards, Jan\n\n\n\n",
"msg_date": "Fri, 4 Nov 2022 19:19:19 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> I need to do some testing on this. I seem to recall that the naming was \n> originally done because a reference cursor is basically a named cursor \n> that can be handed around between functions and even the top SQL level \n> of the application. For the latter to work the application needs to know \n> the name of the portal.\n\nRight. With this patch, it'd be necessary to hand back the actual\nportal name (by returning the refcursor value), or else manually\nset the refcursor value before OPEN to preserve the previous behavior.\nBut as far as I saw, all our documentation examples show handing back\nthe portal name, so I'm hoping most people do it like that already.\n\n> I am currently down with Covid and have trouble focusing. But I hope to \n> get to it some time next week.\n\nGet well soon!\n\n\t\t\tregards, tom lane\n\n\n",
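As an illustration of the two idioms mentioned here (names invented for the sketch): hand back the refcursor value so the caller sees whatever portal name was generated, or pin the name before OPEN to keep the old behavior:

```sql
-- Pattern 1: hand back the actual (possibly generated) portal name.
CREATE FUNCTION get_cursor() RETURNS refcursor LANGUAGE plpgsql AS $$
DECLARE
    c refcursor;
BEGIN
    OPEN c FOR SELECT relname FROM pg_class;
    RETURN c;          -- caller FETCHes from whatever name was assigned
END;
$$;

-- Pattern 2: pin the portal name explicitly before OPEN.
CREATE FUNCTION get_named_cursor() RETURNS refcursor LANGUAGE plpgsql AS $$
DECLARE
    c refcursor := 'my_portal';
BEGIN
    OPEN c FOR SELECT relname FROM pg_class;
    RETURN c;          -- always the portal "my_portal"
END;
$$;
```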
"msg_date": "Fri, 04 Nov 2022 19:46:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "On 11/4/22 19:46, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> I need to do some testing on this. I seem to recall that the naming was \n>> originally done because a reference cursor is basically a named cursor \n>> that can be handed around between functions and even the top SQL level \n>> of the application. For the latter to work the application needs to know \n>> the name of the portal.\n> \n> Right. With this patch, it'd be necessary to hand back the actual\n> portal name (by returning the refcursor value), or else manually\n> set the refcursor value before OPEN to preserve the previous behavior.\n> But as far as I saw, all our documentation examples show handing back\n> the portal name, so I'm hoping most people do it like that already.\n\nI was mostly concerned that we may unintentionally break underdocumented \nbehavior that was originally implemented on purpose. As long as everyone \nis aware that this is breaking backwards compatibility in the way it \ndoes, that's fine.\n\n> \n>> I am currently down with Covid and have trouble focusing. But I hope to \n>> get to it some time next week.\n> \n> Get well soon!\n\nThanks, Jan\n\n\n\n",
"msg_date": "Mon, 7 Nov 2022 11:10:49 -0500",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "Dne po 7. 11. 2022 17:10 uživatel Jan Wieck <jan@wi3ck.info> napsal:\n\n> On 11/4/22 19:46, Tom Lane wrote:\n> > Jan Wieck <jan@wi3ck.info> writes:\n> >> I need to do some testing on this. I seem to recall that the naming was\n> >> originally done because a reference cursor is basically a named cursor\n> >> that can be handed around between functions and even the top SQL level\n> >> of the application. For the latter to work the application needs to\n> know\n> >> the name of the portal.\n> >\n> > Right. With this patch, it'd be necessary to hand back the actual\n> > portal name (by returning the refcursor value), or else manually\n> > set the refcursor value before OPEN to preserve the previous behavior.\n> > But as far as I saw, all our documentation examples show handing back\n> > the portal name, so I'm hoping most people do it like that already.\n>\n> I was mostly concerned that we may unintentionally break underdocumented\n> behavior that was originally implemented on purpose. As long as everyone\n> is aware that this is breaking backwards compatibility in the way it\n> does, that's fine.\n>\n\nIn this case I find the current behavior a little unhappy. It breaks any\nrecursive call and it can break variable shadowing, so I prefer the change. The\npossibility of a compatibility break is clear, but there is a possibility of an\neasy fix, and I think I can detect some possibly incompatible usage in\nplpgsql_check.\n\nThe dependency on the current behavior is probably limited to pretty old\napplications that don't use refcursors.\n\nRegards\n\nPavel\n\n\n> >\n> >> I am currently down with Covid and have trouble focusing. But I hope to\n> >> get to it some time next week.\n> >\n> > Get well soon!\n>\n> Thanks, Jan\n>\n>",
"msg_date": "Mon, 7 Nov 2022 17:32:42 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "On Mon, Nov 7, 2022 at 11:10 AM Jan Wieck <jan@wi3ck.info> wrote:\n\n> On 11/4/22 19:46, Tom Lane wrote:\n> > Jan Wieck <jan@wi3ck.info> writes:\n> >> I need to do some testing on this. I seem to recall that the naming was\n> >> originally done because a reference cursor is basically a named cursor\n> >> that can be handed around between functions and even the top SQL level\n> >> of the application. For the latter to work the application needs to\n> know\n> >> the name of the portal.\n> >\n> > Right. With this patch, it'd be necessary to hand back the actual\n> > portal name (by returning the refcursor value), or else manually\n> > set the refcursor value before OPEN to preserve the previous behavior.\n> > But as far as I saw, all our documentation examples show handing back\n> > the portal name, so I'm hoping most people do it like that already.\n>\n> I was mostly concerned that we may unintentionally break underdocumented\n> behavior that was originally implemented on purpose. As long as everyone\n> is aware that this is breaking backwards compatibility in the way it\n> does, that's fine.\n>\n\nI respect the concern, and applied some deeper thinking to it...\n\nHere is the logic I am applying to this compatibility issue and what may\nbreak.\n[FWIW, my motto is to be wrong out loud, as you learn faster]\n\nAt first pass, I thought \"Well, since this does not break a refcursor,\nwhich is the obvious use case for RETURNING/PASSING, we are fine!\"\n\nBut in trying to DEFEND this case, I have come up with an example of code\n(that makes some SENSE, but would break):\n\nCREATE FUNCTION test() RETURNS refcursor LANGUAGE plpgsql AS $$\nDECLARE\n cur_this cursor FOR SELECT 1;\n ref_cur refcursor;\nBEGIN\n OPEN cur_this;\n ref_cur := 'cur_this'; -- Using the NAME of the cursor as the portal\nname: Should do: ref_cur := cur_this; -- Only works after OPEN\n RETURN ref_cur;\nEND;\n$$;\n\nAs noted in the comments. 
If the code were:\n ref_cur := 'cur_this'; -- Now you can't just use ref_cur := cur_this;\n OPEN cur_this;\n RETURN ref_cur;\nThen it would break now... And even the CORRECT syntax would break, since\nthe cursor was not opened, so \"cur_this\" is null.\n\nNow, I have NO IDEA if someone would actually do this. It is almost\npathological. The use case would be a complex cursor with parameters,\nand they changed the code to return a refcursor!\nThis was the ONLY use case I could think of that wasn't HACKY!\n\nHACKY use cases involve a child routine setting: local_ref_cursor :=\n'cur_this'; in order to access a cursor that was NOT passed to the child.\nFWIW, I tested this, and it works, and I can FETCH in the child routine,\nand it affects the parents' LOOP as it should... WOW. I would be HAPPY\nto break such horrible code, it has to be a security concern at some level.\n\nPersonally (and my 2 cents really shouldn't matter much), I think this\nshould still be fixed.\nBecause I believe this small use case is rare, it will break immediately,\nand the fix is trivial (just initialize cur_this := 'cur_this' in this\nexample),\nand the fix removes the Orthogonal Behavior Tom pointed out, which led me\nto reporting this.\n\nI think I have exhausted examples of how this impacts a VALID\nrefcursor implementation. I believe any other such versions are variations\nof this!\nAnd maybe we document that if a refcursor of a cursor is to be returned,\nthat the refcursor is ASSIGNED after the OPEN of the cursor, and it is done\nwithout the quotes, as:\n ref_cursor := cur_this; -- assign the name after opening.\n\nThanks!",
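For illustration, here is how the example above could be written so it keeps working under the proposed patch: assign the refcursor from the cursor variable after OPEN instead of hard-coding the string 'cur_this' (a sketch, not the patch's own test case):

```sql
CREATE FUNCTION test() RETURNS refcursor LANGUAGE plpgsql AS $$
DECLARE
    cur_this CURSOR FOR SELECT 1;
    ref_cur refcursor;
BEGIN
    OPEN cur_this;          -- with the patch, gets a generated portal name
    ref_cur := cur_this;    -- copy the actual portal name, whatever it is
    RETURN ref_cur;
END;
$$;
```

This works under both the old and the proposed behavior, since it never assumes the portal is literally named "cur_this".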
"msg_date": "Mon, 7 Nov 2022 11:57:37 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "My comments were in no way meant as an argument for or against the \nchange itself. Only to clearly document the side effect it will have.\n\n\nRegards, Jan\n\n\nOn 11/7/22 11:57, Kirk Wolak wrote:\n> \n> \n> On Mon, Nov 7, 2022 at 11:10 AM Jan Wieck <jan@wi3ck.info \n> <mailto:jan@wi3ck.info>> wrote:\n> \n> On 11/4/22 19:46, Tom Lane wrote:\n> > Jan Wieck <jan@wi3ck.info <mailto:jan@wi3ck.info>> writes:\n> >> I need to do some testing on this. I seem to recall that the\n> naming was\n> >> originally done because a reference cursor is basically a named\n> cursor\n> >> that can be handed around between functions and even the top SQL\n> level\n> >> of the application. For the latter to work the application needs\n> to know\n> >> the name of the portal.\n> >\n> > Right. With this patch, it'd be necessary to hand back the actual\n> > portal name (by returning the refcursor value), or else manually\n> > set the refcursor value before OPEN to preserve the previous\n> behavior.\n> > But as far as I saw, all our documentation examples show handing back\n> > the portal name, so I'm hoping most people do it like that already.\n> \n> I was mostly concerned that we may unintentionally break\n> underdocumented\n> behavior that was originally implemented on purpose. 
As long as\n> everyone\n> is aware that this is breaking backwards compatibility in the way it\n> does, that's fine.\n> \n> \n> I respect the concern, and applied some deeper thinking to it...\n> \n> Here is the logic I am applying to this compatibility issue and what may \n> break.\n> [FWIW, my motto is to be wrong out loud, as you learn faster]\n> \n> At first pass, I thought \"Well, since this does not break a refcursor, \n> which is the obvious use case for RETURNING/PASSING, we are fine!\"\n> \n> But in trying to DEFEND this case, I have come up with example of code \n> (that makes some SENSE, but would break):\n> \n> CREATE FUNCTION test() RETURNS refcursor() LANGUAGE plpgsql AS $$\n> DECLARE\n> cur_this cursor FOR SELECT 1;\n> ref_cur refcursor;\n> BEGIN\n> OPEN cur_this;\n> ref_cur := 'cur_this'; -- Using the NAME of the cursor as the \n> portal name: Should do: ref_cur := cur_this; -- Only works after OPEN\n> RETURN ref_cur;\n> END;\n> $$;\n> \n> As noted in the comments. If the code were:\n> ref_cur := 'cur_this'; -- Now you can't just use ref_cur := cur_this;\n> OPEN cur_this;\n> RETURN ref_cur;\n> Then it would break now... And even the CORRECT syntax would break, \n> since the cursor was not opened, so \"cur_this\" is null.\n> \n> Now, I have NO IDEA if someone would actually do this. It is almost \n> pathological. The use case would be a complex cursor with parameters,\n> and they changed the code to return a refcursor!\n> This was the ONLY use case I could think of that wasn't HACKY!\n> \n> HACKY use cases involve a child routine setting: local_ref_cursor := \n> 'cur_this'; in order to access a cursor that was NOT passed to the child.\n> FWIW, I tested this, and it works, and I can FETCH in the child routine, \n> and it affects the parents' LOOP as it should... WOW. 
I would be HAPPY\n> to break such horrible code, it has to be a security concern at some level.\n> \n> Personally (and my 2 cents really shouldn't matter much), I think this \n> should still be fixed.\n> Because I believe this small use case is rare, it will break \n> immediately, and the fix is trivial (just initialize cur_this := \n> 'cur_this' in this example),\n> and the fix removes the Orthogonal Behavior Tom pointed out, which led \n> me to reporting this.\n> \n> I think I have exhausted examples of how this impacts a VALID \n> refcursor implementation. I believe any other such versions are \n> variations of this!\n> And maybe we document that if a refcursor of a cursor is to be returned, \n> that the refcursor is ASSIGNED after the OPEN of the cursor, and it is \n> done without the quotes, as:\n> ref_cursor := cur_this; -- assign the name after opening.\n> \n> Thanks!\n> \n> \n\n\n\n",
"msg_date": "Mon, 7 Nov 2022 16:54:29 -0500",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
},
{
"msg_contents": "Hi\n\nI wrote a new check in plpgsql_check that tries to identify explicit work\nwith the name of the referenced portal.\n\ncreate or replace function foo01()\nreturns refcursor as $$#option dump\ndeclare\n c cursor for select 1;\n r refcursor;\nbegin\n open c;\n r := 'c';\n return r;\nend;\n$$ language plpgsql;\nCREATE FUNCTION\n(2023-01-09 16:49:10) postgres=# select * from\nplpgsql_check_function('foo01', compatibility_warnings => true);\n┌───────────────────────────────────────────────────────────────────────────────────┐\n│ plpgsql_check_function\n │\n╞═══════════════════════════════════════════════════════════════════════════════════╡\n│ compatibility:00000:7:assignment:obsolete setting of refcursor or cursor\nvariable │\n│ Detail: Internal name of cursor should not be specified by users.\n │\n│ Context: at assignment to variable \"r\" declared on line 4\n │\n│ warning extra:00000:3:DECLARE:never read variable \"c\"\n │\n└───────────────────────────────────────────────────────────────────────────────────┘\n(4 rows)\n\nRegards\n\nPavel\n",
"msg_date": "Mon, 9 Jan 2023 16:50:29 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PL/pgSQL cursors should get generated portal names by default"
}
] |
[
{
"msg_contents": "In the past we pulled up the ANY-sublink in 2 steps: the first step is to\npull up the sublink as a subquery, and the next step is to pull up the\nsubquery if it is allowed. The benefits of this method are obvious:\npulling up the subquery has more requirements, so even if we can only finish\nthe first step, we still get huge benefits. However, the bad stuff happens\nwhen varlevelsup = 1 is involved; things fail at step 1.\n\nconvert_ANY_sublink_to_join ...\n\n if (contain_vars_of_level((Node *) subselect, 1))\n return NULL;\n\nIn this patch we distinguish the above case and try to pull it up within\none step if that is helpful. It looks to me that what we need to do is just\ntransform it to an EXISTS-SUBLINK.\n\nThe only change is transforming the format of the SUBLINK, so outer-join /\npull-up as semi-join is unrelated, and the correctness should not be an\nissue.\n\nThis can help the following query very much.\n\nmaster:\nexplain (costs off, analyze) select * from tenk1 t1\nwhere hundred in (select hundred from tenk2 t2\n where t2.odd = t1.odd\n and even in (select even from tenk1 t3\n where t3.fivethous = t2.fivethous))\nand even > 0;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Seq Scan on tenk1 t1 (actual time=0.023..234.955 rows=10000 loops=1)\n Filter: ((even > 0) AND (SubPlan 2))\n SubPlan 2\n -> Seq Scan on tenk2 t2 (actual time=0.023..0.023 rows=1 loops=10000)\n Filter: ((odd = t1.odd) AND (SubPlan 1))\n Rows Removed by Filter: 94\n SubPlan 1\n -> Seq Scan on tenk1 t3 (actual time=0.011..0.011 rows=1\nloops=10000)\n Filter: (fivethous = t2.fivethous)\n Rows Removed by Filter: 94\n Planning Time: 0.169 ms\n Execution Time: 235.488 ms\n(12 rows)\n\npatched:\n\nexplain (costs off, analyze) select * from tenk1 t1\nwhere hundred in (select hundred from tenk2 t2\n where t2.odd = t1.odd\n and even in (select even from tenk1 t3\n where t3.fivethous = t2.fivethous))\nand even > 0;\n QUERY 
PLAN\n--------------------------------------------------------------------------------------------------\n Hash Join (actual time=13.102..17.676 rows=10000 loops=1)\n Hash Cond: ((t1.odd = t2.odd) AND (t1.hundred = t2.hundred))\n -> Seq Scan on tenk1 t1 (actual time=0.014..1.702 rows=10000 loops=1)\n Filter: (even > 0)\n -> Hash (actual time=13.080..13.082 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n -> HashAggregate (actual time=13.041..13.060 rows=100 loops=1)\n Group Key: t2.odd, t2.hundred\n Batches: 1 Memory Usage: 73kB\n -> Hash Join (actual time=8.044..11.296 rows=10000 loops=1)\n Hash Cond: ((t3.fivethous = t2.fivethous) AND (t3.even\n= t2.even))\n -> HashAggregate (actual time=4.054..4.804 rows=5000\nloops=1)\n Group Key: t3.fivethous, t3.even\n Batches: 1 Memory Usage: 465kB\n -> Seq Scan on tenk1 t3 (actual\ntime=0.002..0.862 rows=10000 loops=1)\n -> Hash (actual time=3.962..3.962 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 597kB\n -> Seq Scan on tenk2 t2 (actual\ntime=0.004..2.289 rows=10000 loops=1)\n Planning Time: 0.426 ms\n Execution Time: 18.129 ms\n(20 rows)\n\nThe execution time is 33ms (patched) VS 235ms (master).\nThe planning time is 0.426ms (patched) VS 0.169ms (master).\n\nI think the extra planning time comes from the search space increasing a\nlot and that's where the better plan comes.\n\nI used below queries to measure how much effort we made but got nothing:\nrun twice in 1 session and just count the second planning time.\n\nexplain (costs off, analyze) select * from tenk1 t1\nwhere\n(hundred, odd) in (select hundred, odd from tenk2 t2\n where (even, fivethous) in\n (select even, fivethous from tenk1 t3));\n\n\npsql regression -f 1.sql | grep 'Planning Time' | tail -1\n\nmaster:\n\nPlanning Time: 0.430 ms\nPlanning Time: 0.551 ms\nPlanning Time: 0.316 ms\nPlanning Time: 0.342 ms\nPlanning Time: 0.390 ms\n\npatched:\n\nPlanning Time: 0.405 ms\nPlanning Time: 0.406 ms\nPlanning Time: 0.433 
ms\nPlanning Time: 0.371 ms\nPlanning Time: 0.425 ms\n\n\nI think this can show us the extra planning effort is pretty low.\n\nThis topic has been raised many times, at least at [1] [2]. and even MySQL\ncan support some simple but common cases. I think we can do something\nhelpful as well. Any feedback is welcome.\n\n[1] https://www.postgresql.org/message-id/3691.1342650974%40sss.pgh.pa.us\n[2]\nhttps://www.postgresql.org/message-id/CAN_9JTx7N+CxEQLnu_uHxx+EscSgxLLuNgaZT6Sjvdpt7toy3w@mail.gmail.com\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 2 Nov 2022 11:02:58 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "A new strategy for pull-up correlated ANY_SUBLINK"
},
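The win in the patched plan above comes from evaluating the subquery once: master re-runs the correlated sublink as a per-row filter (the SubPlan nodes), while the patched plan deduplicates the subquery output on (odd, hundred) and hash-joins against it. A minimal sketch of the two evaluation strategies over toy Python lists (hypothetical data standing in for tenk1/tenk2; illustration only, not PostgreSQL code):

```python
# Toy stand-ins for tenk1/tenk2 (made-up data, not the regression tables).
t1 = [{"hundred": i % 100, "odd": i % 2} for i in range(1000)]
t2 = [{"hundred": (i * 3) % 100, "odd": i % 2} for i in range(1000)]

def subplan_filter():
    # Master-style plan: re-evaluate the correlated sublink for every t1 row.
    return [r for r in t1
            if r["hundred"] in {s["hundred"] for s in t2 if s["odd"] == r["odd"]}]

def hash_semijoin():
    # Patched-style plan: deduplicate (odd, hundred) once (the HashAggregate),
    # then probe it per outer row (the Hash Join).
    dedup = {(s["odd"], s["hundred"]) for s in t2}
    return [r for r in t1 if (r["odd"], r["hundred"]) in dedup]

assert subplan_filter() == hash_semijoin()
```

Both shapes return the same rows; the second hoists the subquery work out of the per-row loop, which is where the execution-time gap in the EXPLAIN output above comes from.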
{
"msg_contents": "On 2/11/2022 09:02, Andy Fan wrote:\n> In the past we pull-up the ANY-sublink with 2 steps, the first step is to\n> pull up the sublink as a subquery, and the next step is to pull up the\n> subquery if it is allowed. The benefits of this method are obvious,\n> pulling up the subquery has more requirements, even if we can just finish\n> the first step, we still get huge benefits. However the bad stuff happens\n> if varlevelsup = 1 involves, things fail at step 1.\n> \n> convert_ANY_sublink_to_join ...\n> \n> if (contain_vars_of_level((Node *) subselect, 1))\n> return NULL;\n> \n> In this patch we distinguish the above case and try to pull-up it within\n> one step if it is helpful, It looks to me that what we need to do is just\n> transform it to EXIST-SUBLINK.\nMaybe code [1] would be useful for your purposes/tests.\nWe implemented flattening of correlated subqueries for simple N-J case, \nbut found out that in some cases the flattening isn't obvious the best \nsolution - we haven't info about cardinality/cost estimations and can do \nworse.\nI guess, for more complex flattening procedure (with aggregate function \nin a targetlist of correlated subquery) situation can be even worse.\nMaybe your idea has such corner cases too ?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CALNJ-vTa5VgvV1NPRHnypdnbx-fhDu7vWp73EkMUbZRpNHTYQQ%40mail.gmail.com\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Wed, 2 Nov 2022 09:42:28 +0600",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Andrey:\n\n\n> > In this patch we distinguish the above case and try to pull-up it within\n> > one step if it is helpful, It looks to me that what we need to do is just\n> > transform it to EXIST-SUBLINK.\n> Maybe code [1] would be useful for your purposes/tests.\n>\n\nLooks like we are resolving the same problem, IIUC, great that more\npeople are interested in it!\n\nWe implemented flattening of correlated subqueries for simple N-J case,\n\n\nI went through the code, and it looks like you tried to do the pull-up by\nyourself, which would have many troubles to think about. but I just\ntransformed\nit into EXIST sublink after I distinguish it as the case I can improve.\n> The only change is transforming the format of SUBLINK, so outer-join /\n> pull-up as semi-join is unrelated, so the correctness should not be an\n> issue.\n\nThat is just a difference, no matter which one is better.\n\nbut found out that in some cases the flattening isn't obvious the best\n> solution - we haven't info about cardinality/cost estimations and can do\n> worse.\n\nI guess, for more complex flattening procedure (with aggregate function\n> in a targetlist of correlated subquery) situation can be even worse.\n> Maybe your idea has such corner cases too ?\n>\n\nIn my case, since aggregate function can't be handled by\ncovert_EXISTS_sublink_to_join, so it is not the target I want to optimize in\nthis patch. More testing/review on my method would be pretty appreciated.\nbut I'm not insisting on my method at all. 
Link [2] might be useful as\nwell.\n\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWpi9oztiomUQt4JCxXEr6EaQ2thY-7JYDm6c9he0A7oCA%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 2 Nov 2022 13:34:54 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> In the past we pull-up the ANY-sublink with 2 steps, the first step is to\n> pull up the sublink as a subquery, and the next step is to pull up the\n> subquery if it is allowed. The benefits of this method are obvious,\n> pulling up the subquery has more requirements, even if we can just finish\n> the first step, we still get huge benefits. However the bad stuff happens\n> if varlevelsup = 1 involves, things fail at step 1.\n\n> convert_ANY_sublink_to_join ...\n\n> if (contain_vars_of_level((Node *) subselect, 1))\n> return NULL;\n\n> In this patch we distinguish the above case and try to pull-up it within\n> one step if it is helpful, It looks to me that what we need to do is just\n> transform it to EXIST-SUBLINK.\n\nThis patch seems awfully messy to me. The fact that you're having to\nduplicate stuff done elsewhere suggests at the least that you've not\nplugged the code into the best place.\n\nLooking again at that contain_vars_of_level restriction, I think the\nreason for it was just to avoid making a FROM subquery that has outer\nreferences, and the reason we needed to avoid that was merely that we\ndidn't have LATERAL at the time. So I experimented with the attached.\nIt seems to work, in that we don't get wrong answers from any of the\nsmall number of places that are affected. (I wonder though whether\nthose test cases still test what they were intended to, particularly\nthe postgres_fdw one. We might have to try to hack them some more\nto not get affected by this optimization.) Could do with more test\ncases, no doubt.\n\nOne thing I'm not at all clear about is whether we need to restrict\nthe optimization so that it doesn't occur if the subquery contains\nouter references falling outside available_rels. 
I think that that\ncase is covered by is_simple_subquery() deciding later to not pull up\nthe subquery based on LATERAL restrictions, but maybe that misses\nsomething.\n\nI'm also wondering whether the similar restriction in\nconvert_EXISTS_sublink_to_join could be removed similarly.\nIn this light it was a mistake for convert_EXISTS_sublink_to_join\nto manage the pullup itself rather than doing it in the two-step\nfashion that convert_ANY_sublink_to_join does it.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 12 Nov 2022 17:45:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
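Tom's LATERAL formulation can be read operationally: the subselect, outer references included, becomes a lateral subquery recomputed per outer row, and the IN test becomes the semi-join qual. A toy check that the two shapes accept the same pairs (hypothetical Python with made-up relations A, B, C; not planner code):

```python
# Made-up relations for illustration only.
A = [{"hundred": i % 10, "odd": i % 2} for i in range(20)]
B = [{"odd": i % 2} for i in range(4)]
C = [{"hundred": (i * 3) % 10, "odd": i % 2} for i in range(15)]

def qual_sublink(a, b):
    # Sublink form: A.hundred IN (SELECT c.hundred FROM C c WHERE c.odd = b.odd)
    return a["hundred"] in [c["hundred"] for c in C if c["odd"] == b["odd"]]

def lateral_rows(b):
    # LATERAL (SELECT c.hundred FROM C c WHERE c.odd = b.odd):
    # the subquery is simply a function of the outer row b.
    return [{"hundred": c["hundred"]} for c in C if c["odd"] == b["odd"]]

def qual_lateral(a, b):
    # Pulled-up form: SEMI JOIN the lateral subquery ON a.hundred = v.hundred.
    return any(a["hundred"] == v["hundred"] for v in lateral_rows(b))

for a in A:
    for b in B:
        assert qual_sublink(a, b) == qual_lateral(a, b)
```

The equivalence holds pair by pair, which is why the pull-up is purely a matter of join-order legality (the LATERAL restrictions discussed below), not of semantics.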
{
"msg_contents": "On Sun, Nov 13, 2022 at 6:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Looking again at that contain_vars_of_level restriction, I think the\n> reason for it was just to avoid making a FROM subquery that has outer\n> references, and the reason we needed to avoid that was merely that we\n> didn't have LATERAL at the time. So I experimented with the attached.\n> It seems to work, in that we don't get wrong answers from any of the\n> small number of places that are affected. (I wonder though whether\n> those test cases still test what they were intended to, particularly\n> the postgres_fdw one. We might have to try to hack them some more\n> to not get affected by this optimization.) Could do with more test\n> cases, no doubt.\n\n\nHmm, it seems there were discussions about this change before, such as\nin [1].\n\n\n> One thing I'm not at all clear about is whether we need to restrict\n> the optimization so that it doesn't occur if the subquery contains\n> outer references falling outside available_rels. I think that that\n> case is covered by is_simple_subquery() deciding later to not pull up\n> the subquery based on LATERAL restrictions, but maybe that misses\n> something.\n\n\nI think we need to do this, otherwise we'd encounter the problem\ndescribed in [2]. In short, the problem is that the constraints imposed\nby LATERAL references may make us fail to find any legal join order. 
As\nan example, consider\n\nexplain select * from A where exists\n (select * from B where A.i in (select C.i from C where C.j = B.j));\nERROR: failed to build any 3-way joins\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAN_9JTx7N%2BCxEQLnu_uHxx%2BEscSgxLLuNgaZT6Sjvdpt7toy3w%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/CAMbWs49cvkF9akbomz_fCCKS=D5TY=4KGHEQcfHPZCXS1GVhkA@mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Tue, 15 Nov 2022 09:02:13 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On Sun, 13 Nov 2022 at 04:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > In the past we pull-up the ANY-sublink with 2 steps, the first step is to\n> > pull up the sublink as a subquery, and the next step is to pull up the\n> > subquery if it is allowed. The benefits of this method are obvious,\n> > pulling up the subquery has more requirements, even if we can just finish\n> > the first step, we still get huge benefits. However the bad stuff happens\n> > if varlevelsup = 1 involves, things fail at step 1.\n>\n> > convert_ANY_sublink_to_join ...\n>\n> > if (contain_vars_of_level((Node *) subselect, 1))\n> > return NULL;\n>\n> > In this patch we distinguish the above case and try to pull-up it within\n> > one step if it is helpful, It looks to me that what we need to do is just\n> > transform it to EXIST-SUBLINK.\n>\n> This patch seems awfully messy to me. The fact that you're having to\n> duplicate stuff done elsewhere suggests at the least that you've not\n> plugged the code into the best place.\n>\n> Looking again at that contain_vars_of_level restriction, I think the\n> reason for it was just to avoid making a FROM subquery that has outer\n> references, and the reason we needed to avoid that was merely that we\n> didn't have LATERAL at the time. So I experimented with the attached.\n> It seems to work, in that we don't get wrong answers from any of the\n> small number of places that are affected. (I wonder though whether\n> those test cases still test what they were intended to, particularly\n> the postgres_fdw one. We might have to try to hack them some more\n> to not get affected by this optimization.) Could do with more test\n> cases, no doubt.\n>\n> One thing I'm not at all clear about is whether we need to restrict\n> the optimization so that it doesn't occur if the subquery contains\n> outer references falling outside available_rels. 
I think that that\n> case is covered by is_simple_subquery() deciding later to not pull up\n> the subquery based on LATERAL restrictions, but maybe that misses\n> something.\n>\n> I'm also wondering whether the similar restriction in\n> convert_EXISTS_sublink_to_join could be removed similarly.\n> In this light it was a mistake for convert_EXISTS_sublink_to_join\n> to manage the pullup itself rather than doing it in the two-step\n> fashion that convert_ANY_sublink_to_join does it.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nb82557ecc2ebbf649142740a1c5ce8d19089f620 ===\n=== applying patch ./v2-0001-use-LATERAL-for-ANY_SUBLINK.patch\npatching file contrib/postgres_fdw/expected/postgres_fdw.out\n...\nHunk #2 FAILED at 6074.\nHunk #3 FAILED at 6087.\n2 out of 3 hunks FAILED -- saving rejects to file\nsrc/test/regress/expected/join.out.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3941.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:46:23 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On Fri, 6 Jan 2023 at 11:46, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, 13 Nov 2022 at 04:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > > In the past we pull-up the ANY-sublink with 2 steps, the first step is to\n> > > pull up the sublink as a subquery, and the next step is to pull up the\n> > > subquery if it is allowed. The benefits of this method are obvious,\n> > > pulling up the subquery has more requirements, even if we can just finish\n> > > the first step, we still get huge benefits. However the bad stuff happens\n> > > if varlevelsup = 1 involves, things fail at step 1.\n> >\n> > > convert_ANY_sublink_to_join ...\n> >\n> > > if (contain_vars_of_level((Node *) subselect, 1))\n> > > return NULL;\n> >\n> > > In this patch we distinguish the above case and try to pull-up it within\n> > > one step if it is helpful, It looks to me that what we need to do is just\n> > > transform it to EXIST-SUBLINK.\n> >\n> > This patch seems awfully messy to me. The fact that you're having to\n> > duplicate stuff done elsewhere suggests at the least that you've not\n> > plugged the code into the best place.\n> >\n> > Looking again at that contain_vars_of_level restriction, I think the\n> > reason for it was just to avoid making a FROM subquery that has outer\n> > references, and the reason we needed to avoid that was merely that we\n> > didn't have LATERAL at the time. So I experimented with the attached.\n> > It seems to work, in that we don't get wrong answers from any of the\n> > small number of places that are affected. (I wonder though whether\n> > those test cases still test what they were intended to, particularly\n> > the postgres_fdw one. We might have to try to hack them some more\n> > to not get affected by this optimization.) 
Could do with more test\n> > cases, no doubt.\n> >\n> > One thing I'm not at all clear about is whether we need to restrict\n> > the optimization so that it doesn't occur if the subquery contains\n> > outer references falling outside available_rels. I think that that\n> > case is covered by is_simple_subquery() deciding later to not pull up\n> > the subquery based on LATERAL restrictions, but maybe that misses\n> > something.\n> >\n> > I'm also wondering whether the similar restriction in\n> > convert_EXISTS_sublink_to_join could be removed similarly.\n> > In this light it was a mistake for convert_EXISTS_sublink_to_join\n> > to manage the pullup itself rather than doing it in the two-step\n> > fashion that convert_ANY_sublink_to_join does it.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> b82557ecc2ebbf649142740a1c5ce8d19089f620 ===\n> === applying patch ./v2-0001-use-LATERAL-for-ANY_SUBLINK.patch\n> patching file contrib/postgres_fdw/expected/postgres_fdw.out\n> ...\n> Hunk #2 FAILED at 6074.\n> Hunk #3 FAILED at 6087.\n> 2 out of 3 hunks FAILED -- saving rejects to file\n> src/test/regress/expected/join.out.rej\n\nThere has been no updates on this thread for some time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 31 Jan 2023 23:17:31 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi All:\n\n Sorry for the delay. Once I saw Tom's reply on Nov 15, I tried his\nsuggestion about \"whether we need to restrict the optimization so\nthat it doesn't occur if the subquery contains outer references\nfalling outside available_rels. \" quickly, I'm sure such a restriction\ncan fix the bad case Richard provided. But even Tom \"I'm not at\nall clear about ..\", I'd like to prepare myself better for this\ndiscussion and that took some time. Then an internal urgent\nproject occupied my attention, the project is still in-progress\nnow:(\n\n\n> There has been no updates on this thread for some time, so this has\n> been switched as Returned with Feedback. Feel free to open it in the\n> next commitfest if you plan to continue on this.\n>\n>\nThank you vignesh C for this, I didn't give up yet, probably I can\ncome back in the following month.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Tue, 14 Feb 2023 10:11:23 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Tom:\n\n Sorry for the delayed response! I think my knowledge has been refreshed\n for this discussion.\n\n\n> One thing I'm not at all clear about is whether we need to restrict\n> the optimization so that it doesn't occur if the subquery contains\n> outer references falling outside available_rels. I think that that\n> case is covered by is_simple_subquery() deciding later to not pull up\n> the subquery based on LATERAL restrictions, but maybe that misses\n> something.\n>\n\nI think we need the restriction and that should be enough for this feature\n. Given the query Richard provided before:\n\nexplain\nselect * from tenk1 A where exists\n(select 1 from tenk2 B\nwhere A.hundred in (select C.hundred FROM tenk2 C\nWHERE c.odd = b.odd));\n\nIt first can be converted to the below format without any issue.\n\nSELECT * FROM tenk1 A SEMI JOIN tenk2 B\non A.hundred in (select C.hundred FROM tenk2 C\nWHERE c.odd = b.odd);\n\nThen without the restriction, since we only pull the varnos from\nsublink->testexpr, then it is {A}, so it convert to\n\nSELECT * FROM\n(tenk1 A SEMI JOIN LATERAL (SELECT c.hundred FROM tenk2 C)\nON c.odd = b.odd AND a.hundred = v.hundred)\nSEMI JOIN on tenk2 B ON TRUE;\n\nthen the above query is NOT A VALID QUERY since:\n1. The above query is *not* same as\n\nSELECT * FROM (tenk1 A SEMI JOIN tenk2 B) on true\nSEMI JOIN LATERAL (SELECT c.hundred FROM tenk2 C) v\nON v.odd = b.odd;\n\n2. The above query requires b.odd when B is not available. So it is\nright that an optimizer can't generate a plan for it. The fix would\nbe to do the restriction before applying this optimization.\n\nI'm not sure pull-up-subquery can play any role here, IIUC, the bad thing\nhappens before pull-up-subquery.\n\nI also write & analyze more test and found no issue by me\n\n1. 
SELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n==> should not be pull-up to rarg of the left join since A.hundred is not\navailable.\n\n2. SELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = a.odd);\n==> should not be pull-up to rarg of the left join since A.odd is not\navailable.\n\n3. SELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n==> should be pull-up to rarg of left join.\n\n4. SELECT * FROM tenk1 A INNER JOIN tenk2 B\nON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n==> pull-up as expected.\n\n5. SELECT * FROM tenk1 A RIGHT JOIN tenk2 B\nON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n==> should not be pull-up into larg of left join since b.odd is not\navailable.\n\n\nAbout the existing test case changes because of this patch, they do\nrequires on the sublink is planned to a subPlan, so I introduces the below\nchanges to keep the original intention.\n\nChanges\nA in (SELECT A FROM ..)\nTo\n(A, random() > 0) in (SELECT a, random() > 0 FROM ..);\n\n\nI'm also wondering whether the similar restriction in\n> convert_EXISTS_sublink_to_join could be removed similarly.\n> In this light it was a mistake for convert_EXISTS_sublink_to_join\n> to manage the pullup itself rather than doing it in the two-step\n> fashion that convert_ANY_sublink_to_join does it.\n>\n>\nYes, it is true! I prefer to believe this deserves a separate patch.\n\nAny feedback is welcome!\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 5 Apr 2023 15:15:42 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
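Cases 1, 2 and 5 in the list above all reduce to the same outer-join invariant: a sublink in a LEFT JOIN's ON clause is a per-pair condition, and every left-hand row survives whether or not it ever holds, so the sublink must not become a join that filters those rows. A small sketch of that invariant (toy relations, hypothetical Python):

```python
def left_join(A, B, on):
    # Every row of A survives, whether or not the ON condition ever holds.
    out = []
    for a in A:
        matches = [(a, b) for b in B if on(a, b)]
        out.extend(matches if matches else [(a, None)])
    return out

# Toy stand-ins for case 1 above:
#   A LEFT JOIN B ON A.hundred IN (SELECT c.hundred FROM C c WHERE c.odd = b.odd)
A = [{"hundred": i} for i in range(5)]
B = [{"odd": i % 2} for i in range(4)]
C = [{"hundred": 2, "odd": 0}]        # the subquery only ever yields hundred=2

def on_clause(a, b):
    return a["hundred"] in [c["hundred"] for c in C if c["odd"] == b["odd"]]

joined = left_join(A, B, on_clause)
# All five A rows are present although only hundred=2 ever satisfies the
# sublink, so converting it into a semi-join that filters A would be wrong.
assert {a["hundred"] for (a, _) in joined} == {0, 1, 2, 3, 4}
assert len(joined) == 6   # 4 null-extended rows plus 2 matched pairs
```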
{
"msg_contents": "Hi!\n\nI reviewed your patch and it was interesting for me!\n\nThank you for the explanation. It was really informative for me!\n\n>\n> I think we need the restriction and that should be enough for this feature\n> . Given the query Richard provided before:\n>\n> explain\n> select * from tenk1 A where exists\n> (select 1 from tenk2 B\n> where A.hundred in (select C.hundred FROM tenk2 C\n> WHERE c.odd = b.odd));\n>\n> It first can be converted to the below format without any issue.\n>\n> SELECT * FROM tenk1 A SEMI JOIN tenk2 B\n> on A.hundred in (select C.hundred FROM tenk2 C\n> WHERE c.odd = b.odd);\n>\n> Then without the restriction, since we only pull the varnos from\n> sublink->testexpr, then it is {A}, so it convert to\n>\n> SELECT * FROM\n> (tenk1 A SEMI JOIN LATERAL (SELECT c.hundred FROM tenk2 C)\n> ON c.odd = b.odd AND a.hundred = v.hundred)\n> SEMI JOIN on tenk2 B ON TRUE;\n>\n> then the above query is NOT A VALID QUERY since:\n> 1. The above query is *not* same as\n>\n> SELECT * FROM (tenk1 A SEMI JOIN tenk2 B) on true\n> SEMI JOIN LATERAL (SELECT c.hundred FROM tenk2 C) v\n> ON v.odd = b.odd;\n>\n> 2. The above query requires b.odd when B is not available. So it is\n> right that an optimizer can't generate a plan for it. The fix would\n> be to do the restriction before applying this optimization.\n>\n> I'm not sure pull-up-subquery can play any role here, IIUC, the bad thing\n> happens before pull-up-subquery.\n>\n> I also write & analyze more test and found no issue by me\n>\n> 1. SELECT * FROM tenk1 A LEFT JOIN tenk2 B\n> ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n> ==> should not be pull-up to rarg of the left join since A.hundred is not\n> available.\n>\n> 2. SELECT * FROM tenk1 A LEFT JOIN tenk2 B\n> ON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = a.odd);\n> ==> should not be pull-up to rarg of the left join since A.odd is not\n> available.\n>\n> 3. 
SELECT * FROM tenk1 A LEFT JOIN tenk2 B\n> ON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n> ==> should be pull-up to rarg of left join.\n>\n> 4. SELECT * FROM tenk1 A INNER JOIN tenk2 B\n> ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n> ==> pull-up as expected.\n>\n> 5. SELECT * FROM tenk1 A RIGHT JOIN tenk2 B\n> ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);\n> ==> should not be pull-up into larg of left join since b.odd is not\n> available.\n>\n>\nAfter reviewing, I want to suggest some changes related to the code and \ntests.\n\n\nFirst of all, I think, it would be better to \"treat\" change to \n\"consider\" and rewrite the pull-up check condition in two lines:\n\n/*\n * If the sub-select refers to any Vars of the parent query, we so let's\n * considering it as LATERAL. (Vars of higher levels don't matter here.)\n */\n\nuse_lateral = !bms_is_empty(sub_ref_outer_relids) &&\nbms_is_subset(sub_ref_outer_relids, available_rels);\n\nif (!use_lateral && !bms_is_empty(sub_ref_outer_relids))\n return NULL;\n\n\nSecondly, I noticed another interesting feature in your patch and I \nthink it could be added to the test.\n\nIf we get only one row from the aggregated subquery, we can pull-up it \nin the subquery scan filter.\n\npostgres=# explain (costs off)\nSELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n\n QUERY PLAN\n--------------------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Nested Loop\n -> Seq Scan on tenk2 b\n*-> Subquery Scan on \"ANY_subquery\"\n Filter: (b.hundred = \"ANY_subquery\".min)*\n -> Aggregate\n -> Seq Scan on tenk2 c\n Filter: (odd = b.odd)\n(10 rows)\n\nIt was impossible without your patch:\n\npostgres=# explain (costs off)\nSELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n QUERY 
PLAN\n---------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Seq Scan on tenk2 b\n Filter: (SubPlan 1)\n SubPlan 1\n -> Aggregate\n -> Seq Scan on tenk2 c\n Filter: (odd = b.odd)\n(9 rows)\n\n\nAnd I found an alternative query, when aggregated sublink will pull-up \ninto JoinExpr condition.\n\nexplain (costs off)\nSELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT count(c.hundred) FROM tenk2 C group by (c.odd));\n QUERY PLAN\n-------------------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Hash Semi Join\n*Hash Cond: (b.hundred = \"ANY_subquery\".count)*\n -> Seq Scan on tenk2 b\n -> Hash\n -> Subquery Scan on \"ANY_subquery\"\n -> HashAggregate\n Group Key: c.odd\n -> Seq Scan on tenk2 c\n(11 rows)\n\n\nUnfortunately, I found a request when sublink did not pull-up, as in the \nexamples above. I couldn't quite figure out why.\n\ncreate table a (x int, y int, z int, t int);\ncreate table b (x int, t int);\ncreate unique index on a (t, x);\ncreate index on b (t,x);\ninsert into a select id, id, id, id FROM generate_series(1,100000) As id;\ninsert into b select id, id FROM generate_series(1,1000) As id;\n\nexplain (analyze, costs off, buffers)\nselect b.x, b.x, a.y\nfrom b\n left join a\n on b.x=a.x and\n*b.t in\n (select max(a0.t) *\n from a a0\n where a0.x = b.x and\n a0.t = b.t);\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Hash Right Join (actual time=1.150..58.512 rows=1000 loops=1)\n Hash Cond: (a.x = b.x)\n*Join Filter: (SubPlan 2)*\n Buffers: shared hit=3546\n -> Seq Scan on a (actual time=0.023..15.798 rows=100000 loops=1)\n Buffers: shared hit=541\n -> Hash (actual time=1.038..1.042 rows=1000 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 72kB\n Buffers: shared hit=5\n -> Seq Scan on b (actual time=0.047..0.399 rows=1000 
loops=1)\n Buffers: shared hit=5\n SubPlan 2\n -> Result (actual time=0.018..0.018 rows=1 loops=1000)\n Buffers: shared hit=3000\n InitPlan 1 (returns $2)\n -> Limit (actual time=0.015..0.016 rows=1 loops=1000)\n Buffers: shared hit=3000\n -> Index Only Scan using a_t_x_idx on a a0 (actual \ntime=0.014..0.014 rows=1 loops=1000)\n Index Cond: ((t IS NOT NULL) AND (t = b.t) AND \n(x = b.x))\n Heap Fetches: 1000\n Buffers: shared hit=3000\n Planning Time: 0.630 ms\n Execution Time: 58.941 ms\n(23 rows)\n\nI thought it would be:\n\nexplain (analyze, costs off, buffers)\nselect b.x, b.x, a.y\nfrom b\n left join a on\n b.x=a.x and\n*b.t =\n (select max(a0.t) *\n from a a0\n where a0.x = b.x and\n a0.t <= b.t);\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Hash Right Join (actual time=1.181..67.927 rows=1000 loops=1)\n Hash Cond: (a.x = b.x)\n*Join Filter: (b.t = (SubPlan 2))*\n Buffers: shared hit=3546\n -> Seq Scan on a (actual time=0.022..17.109 rows=100000 loops=1)\n Buffers: shared hit=541\n -> Hash (actual time=1.065..1.068 rows=1000 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 72kB\n Buffers: shared hit=5\n -> Seq Scan on b (actual time=0.049..0.401 rows=1000 loops=1)\n Buffers: shared hit=5\n SubPlan 2\n -> Result (actual time=0.025..0.025 rows=1 loops=1000)\n Buffers: shared hit=3000\n InitPlan 1 (returns $2)\n -> Limit (actual time=0.024..0.024 rows=1 loops=1000)\n Buffers: shared hit=3000\n -> Index Only Scan Backward using a_t_x_idx on a a0 \n(actual time=0.023..0.023 rows=1 loops=1000)\n Index Cond: ((t IS NOT NULL) AND (t <= b.t) \nAND (x = b.x))\n Heap Fetches: 1000\n Buffers: shared hit=3000\n Planning Time: 0.689 ms\n Execution Time: 68.220 ms\n(23 rows)\n\nIf you noticed, it became possible after replacing the \"in\" operator \nwith \"=\".\n\n\nI took the liberty of adding this to your patch and added myself as \nreviewer, if you don't mind.\n\n\n-- 
\nRegards,\nAlena Rybakina",
"msg_date": "Thu, 12 Oct 2023 00:01:37 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Alena,\n\nOn Thu, Oct 12, 2023 at 5:01 AM Alena Rybakina <lena.ribackina@yandex.ru>\nwrote:\n\n> Hi!\n>\n> I reviewed your patch and it was interesting for me!\n>\n> Thank you for the explanation. It was really informative for me!\n>\nThanks for your interest in this, and I am glad to know it is informative.\n\n> Unfortunately, I found a request when sublink did not pull-up, as in the\n>\nexamples above. I couldn't quite figure out why.\n>\nI'm not sure what you mean with the \"above\", I guess it should be the\n\"below\"?\n\n\n> explain (analyze, costs off, buffers)\n> select b.x, b.x, a.y\n> from b\n> left join a\n> on b.x=a.x and\n>\n> *b.t in (select max(a0.t) *\n> from a a0\n> where a0.x = b.x and\n> a0.t = b.t);\n>\n...\n\n> SubPlan 2\n>\n\nHere the sublink can't be pulled up because of its reference to\nthe LHS of left join, the original logic is that no matter the 'b.t in ..'\nreturns the true or false, the rows in LHS will be returned. If we\npull it up to LHS, some rows in LHS will be filtered out, which\nbreaks its original semantics.\n\nI thought it would be:\n>\n> explain (analyze, costs off, buffers)\n> select b.x, b.x, a.y\n> from b\n> left join a on\n> b.x=a.x and\n>\n> *b.t = (select max(a0.t) *\n> from a a0\n> where a0.x = b.x and\n> a0.t <= b.t);\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------\n> Hash Right Join (actual time=1.181..67.927 rows=1000 loops=1)\n> Hash Cond: (a.x = b.x)\n> *Join Filter: (b.t = (SubPlan 2))*\n> Buffers: shared hit=3546\n> -> Seq Scan on a (actual time=0.022..17.109 rows=100000 loops=1)\n> Buffers: shared hit=541\n> -> Hash (actual time=1.065..1.068 rows=1000 loops=1)\n> Buckets: 4096 Batches: 1 Memory Usage: 72kB\n> Buffers: shared hit=5\n> -> Seq Scan on b (actual time=0.049..0.401 rows=1000 loops=1)\n> Buffers: shared hit=5\n> SubPlan 2\n> -> Result (actual time=0.025..0.025 rows=1 loops=1000)\n> Buffers: 
shared hit=3000\n> InitPlan 1 (returns $2)\n> -> Limit (actual time=0.024..0.024 rows=1 loops=1000)\n> Buffers: shared hit=3000\n> -> Index Only Scan Backward using a_t_x_idx on a a0\n> (actual time=0.023..0.023 rows=1 loops=1000)\n> Index Cond: ((t IS NOT NULL) AND (t <= b.t) AND\n> (x = b.x))\n> Heap Fetches: 1000\n> Buffers: shared hit=3000\n> Planning Time: 0.689 ms\n> Execution Time: 68.220 ms\n> (23 rows)\n>\n> If you noticed, it became possible after replacing the \"in\" operator with\n> \"=\".\n>\nI didn't notice much difference between the 'in' and '=', maybe I\nmissed something?\n\n> I took the liberty of adding this to your patch and added myself as\n> reviewer, if you don't mind.\n>\nSure, the patch after your modification looks better than the original.\nI'm not sure how the test case around \"because of got one row\" is\nrelevant to the current changes. After we reach to some agreement\non the above discussion, I think v4 is good for committer to review!\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 12 Oct 2023 15:52:07 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On 12.10.2023 10:52, Andy Fan wrote:\n>\n> Unfortunately, I found a request when sublink did not pull-up, as\n> in the\n>\n> examples above. I couldn't quite figure out why.\n>\n> I'm not sure what you mean with the \"above\", I guess it should be the \n> \"below\"?\n\nYes, you are right)\n\n\n>\n> explain (analyze, costs off, buffers)\n> select b.x, b.x, a.y\n> from b\n> left join a\n> on b.x=a.x and\n> *b.t in\n> (select max(a0.t) *\n> from a a0\n> where a0.x = b.x and\n> a0.t = b.t);\n>\n> ...\n>\n> SubPlan 2\n>\n>\n> Here the sublink can't be pulled up because of its reference to\n> the LHS of left join, the original logic is that no matter the 'b.t \n> in ..'\n> returns the true or false, the rows in LHS will be returned. If we\n> pull it up to LHS, some rows in LHS will be filtered out, which\n> breaks its original semantics.\n\nThanks for the explanation, it became more clear to me here.\n\n\n> I thought it would be:\n>\n> explain (analyze, costs off, buffers)\n> select b.x, b.x, a.y\n> from b\n> left join a on\n> b.x=a.x and\n> *b.t =\n> (select max(a0.t) *\n> from a a0\n> where a0.x = b.x and\n> a0.t <= b.t);\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Hash Right Join (actual time=1.181..67.927 rows=1000 loops=1)\n> Hash Cond: (a.x = b.x)\n> *Join Filter: (b.t = (SubPlan 2))*\n> Buffers: shared hit=3546\n> -> Seq Scan on a (actual time=0.022..17.109 rows=100000 loops=1)\n> Buffers: shared hit=541\n> -> Hash (actual time=1.065..1.068 rows=1000 loops=1)\n> Buckets: 4096 Batches: 1 Memory Usage: 72kB\n> Buffers: shared hit=5\n> -> Seq Scan on b (actual time=0.049..0.401 rows=1000\n> loops=1)\n> Buffers: shared hit=5\n> SubPlan 2\n> -> Result (actual time=0.025..0.025 rows=1 loops=1000)\n> Buffers: shared hit=3000\n> InitPlan 1 (returns $2)\n> -> Limit (actual time=0.024..0.024 rows=1 loops=1000)\n> Buffers: shared hit=3000\n> -> Index Only Scan 
Backward using a_t_x_idx on\n> a a0 (actual time=0.023..0.023 rows=1 loops=1000)\n> Index Cond: ((t IS NOT NULL) AND (t <=\n> b.t) AND (x = b.x))\n> Heap Fetches: 1000\n> Buffers: shared hit=3000\n> Planning Time: 0.689 ms\n> Execution Time: 68.220 ms\n> (23 rows)\n>\n> If you noticed, it became possible after replacing the \"in\"\n> operator with \"=\".\n>\n> I didn't notice much difference between the 'in' and '=', maybe I\n> missed something?\n\nIt seems to me that the expressions \"=\" and \"IN\" are equivalent here due \nto the fact that the aggregated subquery returns only one value, and the \nresult with the \"IN\" operation can be considered as the intersection of \nelements on the left and right. In this query, we have some kind of set \non the left, among which there will be found or not only one element on \nthe right. In general, this expression can be considered as b=const, so \npush down will be applied to b and we can filter b during its scanning \nby the subquery's result.\nBut I think your explanation is necessary here, that this is all \npossible, because we can pull up the sublink here, since filtering is \nallowed on the right side (the nullable side) and does not break the \nsemantics of LHS. But in contrast, I also added two queries where \npull-up is impossible and it is not done here. 
Otherwise if filtering \nwas applied on the left it would be mistake.\n\nTo be honest, I'm not sure if this explanation is needed in the test \nanymore, so I didn't add it.\n\nexplain (costs off)\nSELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON A.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n QUERY PLAN\n-----------------------------------------------------------------\n Nested Loop Left Join\n Join Filter: (SubPlan 2)\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Seq Scan on tenk2 b\n SubPlan 2\n -> Result\n InitPlan 1 (returns $1)\n -> Limit\n -> Index Scan using tenk2_hundred on tenk2 c\n Index Cond: (hundred IS NOT NULL)\n Filter: (odd = b.odd)\n(12 rows)\n\nexplain (costs off)\nSELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON A.hundred in (SELECT count(c.hundred) FROM tenk2 C group by (c.odd));\n QUERY PLAN\n-----------------------------------\n Nested Loop Left Join\n Join Filter: (hashed SubPlan 1)\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Seq Scan on tenk2 b\n SubPlan 1\n -> HashAggregate\n Group Key: c.odd\n -> Seq Scan on tenk2 c\n(9 rows)\n\n\n> I took the liberty of adding this to your patch and added myself\n> as reviewer, if you don't mind.\n>\n> Sure, the patch after your modification looks better than the original.\n> I'm not sure how the test case around \"because of got one row\" is\n> relevant to the current changes. After we reach to some agreement\n> on the above discussion, I think v4 is good for committer to review!\n\nThank you!) I am ready to discuss it.\n\n-- \nRegards,\nAlena Rybakina",
"msg_date": "Fri, 13 Oct 2023 02:14:13 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": ">\n> It seems to me that the expressions \"=\" and \"IN\" are equivalent here due\n> to the fact that the aggregated subquery returns only one value, and the\n> result with the \"IN\" operation can be considered as the intersection of\n> elements on the left and right. In this query, we have some kind of set on\n> the left, among which there will be found or not only one element on the\n> right.\n>\n\nYes, they are equivalent at the final result, but there are some\ndifferences at the execution level. the '=' case will be transformed\nto a Subplan whose subPlanType is EXPR_SUBLINK, so if there\nis more than 1 rows is returned in the subplan, error will be raised.\n\nselect * from tenk1 where\n ten = (select ten from tenk1 i where i.two = tenk1.two );\n\nERROR: more than one row returned by a subquery used as an expression\n\nHowever the IN case would not.\nselect * from tenk1 where\n ten = (select ten from tenk1 i where i.two = tenk1.two ) is OK.\n\nI think the test case you added is not related to this feature. the\ndifference is there even without the patch. so I kept the code\nyou changed, but not for the test case.\n\nI took the liberty of adding this to your patch and added myself as\n>> reviewer, if you don't mind.\n>>\n> Sure, the patch after your modification looks better than the original.\n> I'm not sure how the test case around \"because of got one row\" is\n> relevant to the current changes. After we reach to some agreement\n> on the above discussion, I think v4 is good for committer to review!\n>\n>\n> Thank you!) 
I am ready to discuss it.\n>\n\nActually I meant to discuss the \"Unfortunately, I found a request..\", looks\nwe have reached an agreement there:)\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 13 Oct 2023 15:04:45 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi Tom,\n\nWould you like to have a look at this? The change is not big and the\noptimization has also been asked for many times. The attached is the\nv5 version and I also try my best to write a good commit message.\n\nHere is the commit fest entry:\n\nhttps://commitfest.postgresql.org/45/4268/",
"msg_date": "Fri, 13 Oct 2023 15:29:25 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On 13.10.2023 10:04, Andy Fan wrote:\n>\n> It seems to me that the expressions \"=\" and \"IN\" are equivalent\n> here due to the fact that the aggregated subquery returns only one\n> value, and the result with the \"IN\" operation can be considered as\n> the intersection of elements on the left and right. In this query,\n> we have some kind of set on the left, among which there will be\n> found or not only one element on the right.\n>\n>\n> Yes, they are equivalent at the final result, but there are some\n> differences at the execution level. the '=' case will be transformed\n> to a Subplan whose subPlanType is EXPR_SUBLINK, so if there\n> is more than 1 rows is returned in the subplan, error will be raised.\n>\n> select * from tenk1 where\n> ten = (select ten from tenk1 i where i.two = tenk1.two );\n>\n> ERROR: more than one row returned by a subquery used as an expression\n>\n> However the IN case would not.\n> select * from tenk1 where\n> ten = (select ten from tenk1 i where i.two = tenk1.two ) is OK.\n>\n> I think the test case you added is not related to this feature. the\n> difference is there even without the patch. 
so I kept the code\n> you changed, but not for the test case.\nYes, I understand and agree with you that we should delete the last \nqueries, except to one.\n\nThe query below have a different result compared to master, and it is \ncorrect.\n\n\nWithout your patch:\n\nexplain (costs off)\n+SELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Seq Scan on tenk2 b\n Filter: (SubPlan 2)\n SubPlan 2\n -> Result\n InitPlan 1 (returns $1)\n -> Limit\n -> Index Scan using tenk2_hundred on \ntenk2 c\n Index Cond: (hundred IS NOT NULL)\n Filter: (odd = b.odd)\n(12 rows)\n\n\nAfter your patch:\n\npostgres=# explain (costs off)\nSELECT * FROM tenk1 A LEFT JOIN tenk2 B\nON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n\n QUERY PLAN\n--------------------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on tenk1 a\n -> Materialize\n -> Nested Loop\n -> Seq Scan on tenk2 b\n*-> Subquery Scan on \"ANY_subquery\"\n Filter: (b.hundred = \"ANY_subquery\".min)*\n -> Aggregate\n -> Seq Scan on tenk2 c\n Filter: (odd = b.odd)\n(10 rows)\n\n>\n>> I took the liberty of adding this to your patch and added\n>> myself as reviewer, if you don't mind.\n>>\n>> Sure, the patch after your modification looks better than the\n>> original.\n>> I'm not sure how the test case around \"because of got one row\" is\n>> relevant to the current changes. After we reach to some agreement\n>> on the above discussion, I think v4 is good for committer to review!\n>\n> Thank you!) I am ready to discuss it.\n>\n> Actually I meant to discuss the \"Unfortunately, I found a request..\", \n> looks\n> we have reached an agreement there:)\n>\nYes, we have)\n\n-- \nRegards,\nAlena Rybakina",
"msg_date": "Fri, 13 Oct 2023 11:39:41 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On Fri, 13 Oct 2023 at 14:09, Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>\n> On 13.10.2023 10:04, Andy Fan wrote:\n>>\n>> It seems to me that the expressions \"=\" and \"IN\" are equivalent here due to the fact that the aggregated subquery returns only one value, and the result with the \"IN\" operation can be considered as the intersection of elements on the left and right. In this query, we have some kind of set on the left, among which there will be found or not only one element on the right.\n>\n>\n> Yes, they are equivalent at the final result, but there are some\n> differences at the execution level. the '=' case will be transformed\n> to a Subplan whose subPlanType is EXPR_SUBLINK, so if there\n> is more than 1 rows is returned in the subplan, error will be raised.\n>\n> select * from tenk1 where\n> ten = (select ten from tenk1 i where i.two = tenk1.two );\n>\n> ERROR: more than one row returned by a subquery used as an expression\n>\n> However the IN case would not.\n> select * from tenk1 where\n> ten = (select ten from tenk1 i where i.two = tenk1.two ) is OK.\n>\n>\n> I think the test case you added is not related to this feature. the\n> difference is there even without the patch. 
so I kept the code\n> you changed, but not for the test case.\n>\n> Yes, I understand and agree with you that we should delete the last queries, except to one.\n>\n> The query below have a different result compared to master, and it is correct.\n>\n>\n> Without your patch:\n>\n> explain (costs off)\n> +SELECT * FROM tenk1 A LEFT JOIN tenk2 B\n> ON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Nested Loop Left Join\n> -> Seq Scan on tenk1 a\n> -> Materialize\n> -> Seq Scan on tenk2 b\n> Filter: (SubPlan 2)\n> SubPlan 2\n> -> Result\n> InitPlan 1 (returns $1)\n> -> Limit\n> -> Index Scan using tenk2_hundred on tenk2 c\n> Index Cond: (hundred IS NOT NULL)\n> Filter: (odd = b.odd)\n> (12 rows)\n>\n>\n> After your patch:\n>\n> postgres=# explain (costs off)\n> SELECT * FROM tenk1 A LEFT JOIN tenk2 B\n> ON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);\n>\n> QUERY PLAN\n> --------------------------------------------------------------\n> Nested Loop Left Join\n> -> Seq Scan on tenk1 a\n> -> Materialize\n> -> Nested Loop\n> -> Seq Scan on tenk2 b\n> -> Subquery Scan on \"ANY_subquery\"\n> Filter: (b.hundred = \"ANY_subquery\".min)\n> -> Aggregate\n> -> Seq Scan on tenk2 c\n> Filter: (odd = b.odd)\n> (10 rows)\n>\n>\n>>> I took the liberty of adding this to your patch and added myself as reviewer, if you don't mind.\n>>\n>> Sure, the patch after your modification looks better than the original.\n>> I'm not sure how the test case around \"because of got one row\" is\n>> relevant to the current changes. After we reach to some agreement\n>> on the above discussion, I think v4 is good for committer to review!\n>>\n>>\n>> Thank you!) 
I am ready to discuss it.\n>\n>\n> Actually I meant to discuss the \"Unfortunately, I found a request..\", looks\n> we have reached an agreement there:)\n>\n> Yes, we have)\n\nHi Andy Fan,\n\nIf the changes of Alena are ok, can you merge the changes and post an\nupdated version so that CFBot can apply the patch and verify the\nchanges. As currently CFBot is trying to apply only Alena's changes\nand failing with the following at [1]:\n=== Applying patches on top of PostgreSQL commit ID\nfba2112b1569fd001a9e54dfdd73fd3cb8f16140 ===\n=== applying patch ./pull-up.diff\npatching file src/test/regress/expected/subselect.out\nHunk #1 succeeded at 1926 with fuzz 2 (offset -102 lines).\npatching file src/test/regress/sql/subselect.sql\nHunk #1 FAILED at 1000.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/test/regress/sql/subselect.sql.rej\n\n[1] - http://cfbot.cputube.org/patch_46_4268.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:16:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "Hi!\n\n> If the changes of Alena are ok, can you merge the changes and post an\n> updated version so that CFBot can apply the patch and verify the\n> changes. As currently CFBot is trying to apply only Alena's changes\n> and failing with the following at [1]:\n\nI think this is a very nice and pretty simple optimization. I've\nmerged the changes by Alena, and slightly revised the code changes in\nconvert_ANY_sublink_to_join(). I'm going to push this if there are no\nobjections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 13 Feb 2024 12:50:40 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "\nHi Alexander,\n\n> Hi!\n>\n>> If the changes of Alena are ok, can you merge the changes and post an\n>> updated version so that CFBot can apply the patch and verify the\n>> changes. As currently CFBot is trying to apply only Alena's changes\n>> and failing with the following at [1]:\n>\n> I think this is a very nice and pretty simple optimization. I've\n> merged the changes by Alena, and slightly revised the code changes in\n> convert_ANY_sublink_to_join(). I'm going to push this if there are no\n> objections.\n\nThanks for picking up this! I double checked the patch, it looks good to\nme. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Thu, 15 Feb 2024 17:51:28 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On 10/12/23 14:52, Andy Fan wrote:\n> Here the sublink can't be pulled up because of its reference to\n> the LHS of left join, the original logic is that no matter the 'b.t in ..'\n> returns the true or false, the rows in LHS will be returned. If we\n> pull it up to LHS, some rows in LHS will be filtered out, which\n> breaks its original semantics.\nHi,\nI spent some time trying to understand your sentence.\nI mean the following case:\n\nSELECT * FROM t1 LEFT JOIN t2\n ON t2.x IN (SELECT y FROM t3 WHERE t1.x=t3.x);\n\nI read [1,2,3], but I am still unsure why it is impossible in the case \nof OUTER JOIN. By setting the LATERAL clause, we forbid any clauses from \nthe RTE subquery to bubble up as a top-level clause and filter tuples \nfrom LHS, am I wrong? Does it need more research or you can show some \ncase to support your opinion - why this type of transformation must be \ndisallowed?\n\n[1] https://www.postgresql.org/message-id/6531.1218473967%40sss.pgh.pa.us\n[2] \nhttps://www.postgresql.org/message-id/BANLkTikGFtGnAaXVh5%3DntRdN%2B4w%2Br%3DNPuw%40mail.gmail.com\n[3] https://www.vldb.org/conf/1992/P091.PDF\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 1 Jul 2024 16:17:50 +0700",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
},
{
"msg_contents": "On 7/1/24 16:17, Andrei Lepikhov wrote:\n> On 10/12/23 14:52, Andy Fan wrote:\n>> Here the sublink can't be pulled up because of its reference to\n>> the LHS of left join, the original logic is that no matter the 'b.t \n>> in ..'\n>> returns the true or false, the rows in LHS will be returned. If we\n>> pull it up to LHS, some rows in LHS will be filtered out, which\n>> breaks its original semantics.\n> Hi,\n> I spent some time trying to understand your sentence.\n> I mean the following case:\n> \n> SELECT * FROM t1 LEFT JOIN t2\n> ON t2.x IN (SELECT y FROM t3 WHERE t1.x=t3.x);\n> \n> I read [1,2,3], but I am still unsure why it is impossible in the case \n> of OUTER JOIN. By setting the LATERAL clause, we forbid any clauses from \n> the RTE subquery to bubble up as a top-level clause and filter tuples \n> from LHS, am I wrong? Does it need more research or you can show some \n> case to support your opinion - why this type of transformation must be \n> disallowed?\n> \n> [1] https://www.postgresql.org/message-id/6531.1218473967%40sss.pgh.pa.us\n> [2] \n> https://www.postgresql.org/message-id/BANLkTikGFtGnAaXVh5%3DntRdN%2B4w%2Br%3DNPuw%40mail.gmail.com\n> [3] https://www.vldb.org/conf/1992/P091.PDF\n> \n\nI delved into it a bit more. 
After reading [4,5] I invented query that \nis analogue of the query above, but with manually pulled-up sublink:\n\nEXPLAIN (COSTS OFF)\nSELECT * FROM t1 LEFT JOIN t2 JOIN LATERAL\n(SELECT t1.x AS x1, y,x FROM t3) q1 ON (t2.x=q1.y AND q1.x1=q1.x) ON true;\n\nAnd you can see the plan:\n\n Nested Loop Left Join\n -> Seq Scan on t1\n -> Hash Join\n Hash Cond: (t2.x = t3.y)\n -> Seq Scan on t2\n -> Hash\n -> Seq Scan on t3\n Filter: (t1.x = x)\n\nJust for fun, I played with MSSQL Server and if I read its explain \ncorrectly, it also allows pulls-up sublink which mentions LHS:\n\n-------------------------------------\nNested Loops(Left Outer Join, OUTER REFERENCES:(t1.x))\n Table Scan(OBJECT:(t1))\n Hash Match(Right Semi Join, HASH:(t3.y)=(t2.x),\n\t\t\t\tRESIDUAL:(t2.x=t3.y))\n Table Scan(OBJECT:(t3), WHERE:(t1.x=t3.x))\n Table Scan(OBJECT:(t2))\n-------------------------------------\n\n(I cleaned MSSQL explain a little bit for clarity).\nSo, may we allow references to LHS in such sublink?\n\n[4] \nhttps://www.postgresql.org/message-id/flat/15523.1372190410%40sss.pgh.pa.us\n[5] \nhttps://www.postgresql.org/message-id/20130617235236.GA1636@jeremyevans.local\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Wed, 3 Jul 2024 15:33:44 +0700",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A new strategy for pull-up correlated ANY_SUBLINK"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile reviewing a different patch, I have noticed that guc-file.l\nincludes sys/stat.h in the middle of the PG internal headers. The\nusual practice is to have first postgres[_fe].h, followed by the\nsystem headers and finally the internal headers. That's a nit, but\nall the other files do that.\n\n{be,fe}-secure-openssl.c include some exceptions though, as documented\nthere.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 2 Nov 2022 14:29:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Incorrect include file order in guc-file.l"
},
{
"msg_contents": "Hi,\n\nOn Wed, Nov 02, 2022 at 02:29:50PM +0900, Michael Paquier wrote:\n>\n> While reviewing a different patch, I have noticed that guc-file.l\n> includes sys/stat.h in the middle of the PG internal headers. The\n> usual practice is to have first postgres[_fe].h, followed by the\n> system headers and finally the internal headers. That's a nit, but\n> all the other files do that.\n>\n> {be,fe}-secure-openssl.c include some exceptions though, as documented\n> there.\n\nAgreed, it's apparently an oversight in dac048f71eb. +1 for the patch.\n\n\n",
"msg_date": "Wed, 2 Nov 2022 14:01:05 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 1:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Nov 02, 2022 at 02:29:50PM +0900, Michael Paquier wrote:\n> >\n> > While reviewing a different patch, I have noticed that guc-file.l\n> > includes sys/stat.h in the middle of the PG internal headers. The\n> > usual practice is to have first postgres[_fe].h, followed by the\n> > system headers and finally the internal headers. That's a nit, but\n> > all the other files do that.\n> >\n> > {be,fe}-secure-openssl.c include some exceptions though, as documented\n> > there.\n>\n> Agreed, it's apparently an oversight in dac048f71eb. +1 for the patch.\n\nYeah. +1, thanks.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Nov 2, 2022 at 1:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:>> Hi,>> On Wed, Nov 02, 2022 at 02:29:50PM +0900, Michael Paquier wrote:> >> > While reviewing a different patch, I have noticed that guc-file.l> > includes sys/stat.h in the middle of the PG internal headers. The> > usual practice is to have first postgres[_fe].h, followed by the> > system headers and finally the internal headers. That's a nit, but> > all the other files do that.> >> > {be,fe}-secure-openssl.c include some exceptions though, as documented> > there.>> Agreed, it's apparently an oversight in dac048f71eb. +1 for the patch.Yeah. +1, thanks.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 2 Nov 2022 13:53:21 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 1:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Nov 02, 2022 at 02:29:50PM +0900, Michael Paquier wrote:\n> >\n> > While reviewing a different patch, I have noticed that guc-file.l\n> > includes sys/stat.h in the middle of the PG internal headers. The\n> > usual practice is to have first postgres[_fe].h, followed by the\n> > system headers and finally the internal headers. That's a nit, but\n> > all the other files do that.\n> >\n> > {be,fe}-secure-openssl.c include some exceptions though, as documented\n> > there.\n>\n> Agreed, it's apparently an oversight in dac048f71eb. +1 for the patch.\n\nI've pushed this, thanks!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Nov 2, 2022 at 1:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:>> Hi,>> On Wed, Nov 02, 2022 at 02:29:50PM +0900, Michael Paquier wrote:> >> > While reviewing a different patch, I have noticed that guc-file.l> > includes sys/stat.h in the middle of the PG internal headers. The> > usual practice is to have first postgres[_fe].h, followed by the> > system headers and finally the internal headers. That's a nit, but> > all the other files do that.> >> > {be,fe}-secure-openssl.c include some exceptions though, as documented> > there.>> Agreed, it's apparently an oversight in dac048f71eb. +1 for the patch.I've pushed this, thanks!--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Nov 2022 12:40:19 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 12:40:19PM +0700, John Naylor wrote:\n> On Wed, Nov 2, 2022 at 1:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> Agreed, it's apparently an oversight in dac048f71eb. +1 for the patch.\n> \n> I've pushed this, thanks!\n\nThanks for the commit. I had wanted to get it done yesterday, but life\ntook over faster than that. Before committing the change, there is\nsomething I have noticed though: this header does not seem to be\nnecessary at all and it looks like there is nothing in guc-file.l that\nneeds it. Why did you add it in dac048f to begin with?\n--\nMichael",
"msg_date": "Thu, 3 Nov 2022 20:40:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 6:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 03, 2022 at 12:40:19PM +0700, John Naylor wrote:\n> > On Wed, Nov 2, 2022 at 1:01 PM Julien Rouhaud <rjuju123@gmail.com>\nwrote:\n> >> Agreed, it's apparently an oversight in dac048f71eb. +1 for the patch.\n> >\n> > I've pushed this, thanks!\n>\n> Thanks for the commit. I've wanted to get it done yesterday but life\n> took over faster than that. Before committing the change, there is\n> something I have noticed though: this header does not seem to be\n> necessary at all and it looks that there is nothing in guc-file.l that\n> needs it. Why did you add it in dac048f to begin with?\n\nBecause it wouldn't compile otherwise, obviously. :-)\n\nI must have been working on it before bfb9dfd93720\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Nov 2022 19:19:07 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 07:19:07PM +0700, John Naylor wrote:\n> Because it wouldn't compile otherwise, obviously. :-)\n> \n> I must have been working on it before bfb9dfd93720\n\nHehe, my fault then ;p\n\nThe CI is able to complete without it. Would you mind if it is\nremoved? If you don't want us to poke more at the bear, that's a nit\nso leaving things as they are is also fine by me.\n--\nMichael",
"msg_date": "Fri, 4 Nov 2022 07:42:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 5:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> The CI is able to complete without it. Would you mind if it is\n> removed? If you don't want us to poke more at the bear, that's a nit\n> so leaving things as they are is also fine by me.\n\nI've removed it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Nov 2022 07:55:20 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 07:55:20AM +0700, John Naylor wrote:\n> I've removed it.\n\nThanks.\n\nAha, there were three more of these, namely rewriteheap.c, copydir.c\nand pgtz.c, that I also forgot to clean up in bfb9dfd.\n--\nMichael",
"msg_date": "Sat, 5 Nov 2022 12:34:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect include file order in guc-file.l"
}
] |
[
{
"msg_contents": "add spinlock support on loongarch64.",
"msg_date": "Wed, 2 Nov 2022 05:56:36 +0000",
"msg_from": "吴亚飞 (Wu Yafei) <wuyf41619@hundsun.com>",
"msg_from_op": true,
"msg_subject": "spinlock support on loongarch64"
},
{
"msg_contents": "吴亚飞 (Wu Yafei) <wuyf41619@hundsun.com> writes:\n> add spinlock support on loongarch64.\n\nI wonder if we shouldn't just do that (ie, try to use\n__sync_lock_test_and_set) as a generic fallback on any unsupported\narchitecture. We could get rid of the separate stanza for RISC-V\nthat way. The main thing that an arch-specific stanza could bring\nis knowledge of the best data type width to use for a spinlock;\nbut I don't see a big problem with defaulting to \"int\". We can\nalways add arch-specific stanzas for any machines where that's\nshown to be a seriously poor choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Nov 2022 11:37:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 11:37:35 -0400, Tom Lane wrote:\n> =?gb2312?B?zuLRx7fJ?= <wuyf41619@hundsun.com> writes:\n> > add spinlock support on loongarch64.\n> \n> I wonder if we shouldn't just do that (ie, try to use\n> __sync_lock_test_and_set) as a generic fallback on any unsupported\n> architecture. We could get rid of the separate stanza for RISC-V\n> that way. The main thing that an arch-specific stanza could bring\n> is knowledge of the best data type width to use for a spinlock;\n> but I don't see a big problem with defaulting to \"int\". We can\n> always add arch-specific stanzas for any machines where that's\n> shown to be a seriously poor choice.\n\nYes, please. It might not be perfect for all architectures, and it might not\nbe good for some very old architectures. But for anything new it'll be vastly\nbetter than not having spinlocks at all.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Nov 2022 10:27:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-02 11:37:35 -0400, Tom Lane wrote:\n>> I wonder if we shouldn't just do that (ie, try to use\n>> __sync_lock_test_and_set) as a generic fallback on any unsupported\n>> architecture. We could get rid of the separate stanza for RISC-V\n>> that way. The main thing that an arch-specific stanza could bring\n>> is knowledge of the best data type width to use for a spinlock;\n>> but I don't see a big problem with defaulting to \"int\". We can\n>> always add arch-specific stanzas for any machines where that's\n>> shown to be a seriously poor choice.\n\n> Yes, please. It might not be perfect for all architectures, and it might not\n> be good for some very old architectures. But for anything new it'll be vastly\n> better than not having spinlocks at all.\n\nSo about like this, then.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 02 Nov 2022 14:29:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
},
{
"msg_contents": "I wrote:\n> So about like this, then.\n\nAfter actually testing (by removing the ARM stanza on a macOS machine),\nit seems that placement doesn't work, because of the default definition\nof S_UNLOCK at the bottom of the \"#if defined(__GNUC__)\" stuff. Putting\nit inside that test works, and seems like it should be fine, since this\nis a GCC-ism.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 02 Nov 2022 14:55:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 14:55:04 -0400, Tom Lane wrote:\n> I wrote:\n> > So about like this, then.\n> \n> After actually testing (by removing the ARM stanza on a macOS machine),\n> it seems that placement doesn't work, because of the default definition\n> of S_UNLOCK at the bottom of the \"#if defined(__GNUC__)\" stuff. Putting\n> it inside that test works, and seems like it should be fine, since this\n> is a GCC-ism.\n\nLooks reasonable. I tested it on x86-64 by disabling that section and it\nworks.\n\nFWIW, in a heavily spinlock-contending workload it's a tad slower, largely due\nto losing spin_delay. If I define that it's very close. Not that it\nmatters hugely, I just thought it'd be good to validate.\n\nI wonder if it's worth keeping the full copy of this in the arm section? We\ncould just define SPIN_DELAY() for aarch64?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Nov 2022 14:04:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-02 14:55:04 -0400, Tom Lane wrote:\n>> After actually testing (by removing the ARM stanza on a macOS machine),\n>> it seems that placement doesn't work, because of the default definition\n>> of S_UNLOCK at the bottom of the \"#if defined(__GNUC__)\" stuff. Putting\n>> it inside that test works, and seems like it should be fine, since this\n>> is a GCC-ism.\n\n> Looks reasonable. I tested it on x86-64 by disabling that section and it\n> works.\n\nThanks for looking.\n\n> I wonder if it's worth keeping the full copy of this in the arm section? We\n> could just define SPIN_DELAY() for aarch64?\n\nI thought about that, but given the increasing popularity of ARM\nI bet that that stanza is going to accrete more special-case knowledge\nover time. It's probably simplest to keep it separate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Nov 2022 17:37:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
},
{
"msg_contents": "On 2022-11-02 17:37:04 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-02 14:55:04 -0400, Tom Lane wrote:\n> >> After actually testing (by removing the ARM stanza on a macOS machine),\n> >> it seems that placement doesn't work, because of the default definition\n> >> of S_UNLOCK at the bottom of the \"#if defined(__GNUC__)\" stuff. Putting\n> >> it inside that test works, and seems like it should be fine, since this\n> >> is a GCC-ism.\n> \n> > Looks reasonable. I tested it on x86-64 by disabling that section and it\n> > works.\n> \n> Thanks for looking.\n> \n> > I wonder if it's worth keeping the full copy of this in the arm section? We\n> > could just define SPIN_DELAY() for aarch64?\n> \n> I thought about that, but given the increasing popularity of ARM\n> I bet that that stanza is going to accrete more special-case knowledge\n> over time. It's probably simplest to keep it separate.\n\nWFM.\n\n\n",
"msg_date": "Wed, 2 Nov 2022 16:22:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: spinlock support on loongarch64"
}
] |
[
{
"msg_contents": "It's been known for a while that Postgres spends a lot of time translating\ninstruction addresses, and using huge pages in the text segment yields a\nsubstantial performance boost in OLTP workloads [1][2]. The difficulty is,\nthis normally requires a lot of painstaking work (unless your OS does\nsuperpage promotion, like FreeBSD).\n\nI found an MIT-licensed library \"iodlr\" from Intel [3] that allows one to\nremap the .text segment to huge pages at program start. Attached is a\nhackish, Meson-only, \"works on my machine\" patchset to experiment with this\nidea.\n\n0001 adapts the library to our error logging and GUC system. The overview:\n\n- read ELF info to get the start/end addresses of the .text segment\n- calculate addresses therein aligned at huge page boundaries\n- mmap a temporary region and memcpy the aligned portion of the .text\nsegment\n- mmap aligned start address to a second region with huge pages and\nMAP_FIXED\n- memcpy over from the temp region and revoke the PROT_WRITE bit\n\nThe reason this doesn't \"saw off the branch you're standing on\" is that the\nremapping is done in a function that's forced to live in a different\nsegment, and doesn't call any non-libc functions living elsewhere:\n\nstatic void\n__attribute__((__section__(\"lpstub\")))\n__attribute__((__noinline__))\nMoveRegionToLargePages(const mem_range * r, int mmap_flags)\n\nDebug messages show\n\n2022-11-02 12:02:31.064 +07 [26955] DEBUG: .text start: 0x487540\n2022-11-02 12:02:31.064 +07 [26955] DEBUG: .text end: 0x96cf12\n2022-11-02 12:02:31.064 +07 [26955] DEBUG: aligned .text start: 0x600000\n2022-11-02 12:02:31.064 +07 [26955] DEBUG: aligned .text end: 0x800000\n2022-11-02 12:02:31.066 +07 [26955] DEBUG: binary mapped to huge pages\n2022-11-02 12:02:31.066 +07 [26955] DEBUG: un-mmapping temporary code\nregion\n\nHere, out of 5MB of Postgres text, only 1 huge page can be used, but that\nstill saves 512 entries in the TLB and might bring a small improvement. 
The\nun-remapped region below 0x600000 contains the ~600kB of \"cold\" code, since\nthe linker puts the cold section first, at least recent versions of ld and\nlld.\n\n0002 is my attempt to force the linker's hand and get the entire text\nsegment mapped to huge pages. It's quite a finicky hack, and easily broken\n(see below). That said, it still builds easily within our normal build\nprocess, and maybe there is a better way to get the effect.\n\nIt does two things:\n\n- Pass the linker -Wl,-zcommon-page-size=2097152\n-Wl,-zmax-page-size=2097152 which aligns .init to a 2MB boundary. That's\ndone for predictability, but that means the next 2MB boundary is very\nnearly 2MB away.\n\n- Add a \"cold\" __asm__ filler function that just takes up space, enough to\npush the end of the .text segment over the next aligned boundary, or to\n~8MB in size.\n\nIn a non-assert build:\n\n0001:\n\n$ bloaty inst-perf/bin/postgres\n\n FILE SIZE VM SIZE\n -------------- --------------\n 53.7% 4.90Mi 58.7% 4.90Mi .text\n...\n 100.0% 9.12Mi 100.0% 8.35Mi TOTAL\n\n$ readelf -S --wide inst-perf/bin/postgres\n\n [Nr] Name Type Address Off Size ES\nFlg Lk Inf Al\n...\n [12] .init PROGBITS 0000000000486000 086000 00001b 00\n AX 0 0 4\n [13] .plt PROGBITS 0000000000486020 086020 001520 10\n AX 0 0 16\n [14] .text PROGBITS 0000000000487540 087540 4e59d2 00\n AX 0 0 16\n...\n\n0002:\n\n$ bloaty inst-perf/bin/postgres\n\n FILE SIZE VM SIZE\n -------------- --------------\n 46.9% 8.00Mi 69.9% 8.00Mi .text\n...\n 100.0% 17.1Mi 100.0% 11.4Mi TOTAL\n\n\n$ readelf -S --wide inst-perf/bin/postgres\n\n [Nr] Name Type Address Off Size ES\nFlg Lk Inf Al\n...\n [12] .init PROGBITS 0000000000600000 200000 00001b 00\n AX 0 0 4\n [13] .plt PROGBITS 0000000000600020 200020 001520 10\n AX 0 0 16\n [14] .text PROGBITS 0000000000601540 201540 7ff512 00\n AX 0 0 16\n...\n\nDebug messages with 0002 shows 6MB mapped:\n\n2022-11-02 12:35:28.482 +07 [28530] DEBUG: .text start: 0x601540\n2022-11-02 12:35:28.482 +07 
[28530] DEBUG: .text end: 0xe00a52\n2022-11-02 12:35:28.482 +07 [28530] DEBUG: aligned .text start: 0x800000\n2022-11-02 12:35:28.482 +07 [28530] DEBUG: aligned .text end: 0xe00000\n2022-11-02 12:35:28.486 +07 [28530] DEBUG: binary mapped to huge pages\n2022-11-02 12:35:28.486 +07 [28530] DEBUG: un-mmapping temporary code\nregion\n\nSince the front is all-cold, and there is very little at the end,\npractically all hot pages are now remapped. The biggest problem with the\nhackish filler function (in addition to maintainability) is, if explicit\nhuge pages are turned off in the kernel, attempting mmap() with MAP_HUGETLB\ncauses complete startup failure if the .text segment is larger than 8MB. I\nhaven't looked into what's happening there yet, but I didn't want to get\ntoo far in the weeds before getting feedback on whether the entire approach\nin this thread is sound enough to justify working further on.\n\n[1] https://www.cs.rochester.edu/u/sandhya/papers/ispass19.pdf\n (paper: \"On the Impact of Instruction Address Translation Overhead\")\n[2] https://twitter.com/AndresFreundTec/status/1214305610172289024\n[3] https://github.com/intel/iodlr\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 2 Nov 2022 13:32:37 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 13:32:37 +0700, John Naylor wrote:\n> It's been known for a while that Postgres spends a lot of time translating\n> instruction addresses, and using huge pages in the text segment yields a\n> substantial performance boost in OLTP workloads [1][2].\n\nIndeed. Some of that we eventually should address by making our code less\n\"jumpy\", but that's a large amount of work and only going to go so far.\n\n\n> The difficulty is,\n> this normally requires a lot of painstaking work (unless your OS does\n> superpage promotion, like FreeBSD).\n\nI still am confused by FreeBSD being able to do this without changing the\nsection alignment to be big enough. Or is the default alignment on FreeBSD\nlarge enough already?\n\n\n> I found an MIT-licensed library \"iodlr\" from Intel [3] that allows one to\n> remap the .text segment to huge pages at program start. Attached is a\n> hackish, Meson-only, \"works on my machine\" patchset to experiment with this\n> idea.\n\nI wonder how far we can get with just using the linker hints to align\nsections. I know that the linux folks are working on promoting sufficiently\naligned executable pages to huge pages too, and might have succeeded already.\n\nIOW, adding the linker flags might be a good first step.\n\n\n> 0001 adapts the library to our error logging and GUC system. The overview:\n> \n> - read ELF info to get the start/end addresses of the .text segment\n> - calculate addresses therein aligned at huge page boundaries\n> - mmap a temporary region and memcpy the aligned portion of the .text\n> segment\n> - mmap aligned start address to a second region with huge pages and\n> MAP_FIXED\n> - memcpy over from the temp region and revoke the PROT_WRITE bit\n\nWould mremap()'ing the temporary region also work? That might be simpler and\nmore robust (you'd see the MAP_HUGETLB failure before doing anything\nirreversible). 
And you then might not even need this:\n\n> The reason this doesn't \"saw off the branch you're standing on\" is that the\n> remapping is done in a function that's forced to live in a different\n> segment, and doesn't call any non-libc functions living elsewhere:\n> \n> static void\n> __attribute__((__section__(\"lpstub\")))\n> __attribute__((__noinline__))\n> MoveRegionToLargePages(const mem_range * r, int mmap_flags)\n\n\nThis would likely need a bunch more gating than the patch, understandably,\nhas. I think it'd fail horribly if there were .text relocations, for example?\nI think there are some architectures that do that by default...\n\n\n> 0002 is my attempt to force the linker's hand and get the entire text\n> segment mapped to huge pages. It's quite a finicky hack, and easily broken\n> (see below). That said, it still builds easily within our normal build\n> process, and maybe there is a better way to get the effect.\n> \n> It does two things:\n> \n> - Pass the linker -Wl,-zcommon-page-size=2097152\n> -Wl,-zmax-page-size=2097152 which aligns .init to a 2MB boundary. That's\n> done for predictability, but that means the next 2MB boundary is very\n> nearly 2MB away.\n\nYep. FWIW, my notes say\n\n# align sections to 2MB boundaries for hugepage support\n# bfd and gold linkers:\n# -Wl,-zmax-page-size=0x200000 -Wl,-zcommon-page-size=0x200000\n# lld:\n# -Wl,-zmax-page-size=0x200000 -Wl,-z,separate-loadable-segments\n# then copy binary to tmpfs mounted with -o huge=always\n\nI.e. 
with lld you need slightly different flags -Wl,-z,separate-loadable-segments\n\nThe meson bit should probably just use\ncc.get_supported_link_arguments([\n '-Wl,-zmax-page-size=0x200000',\n '-Wl,-zcommon-page-size=0x200000',\n '-Wl,-zseparate-loadable-segments'])\n\nAfaict there's really no reason to not do that by default, allowing kernels\nthat can promote to huge pages to do so.\n\n\nMy approach to forcing huge pages to be used was to then:\n\n# copy binary to tmpfs mounted with -o huge=always\n\n\n> - Add a \"cold\" __asm__ filler function that just takes up space, enough to\n> push the end of the .text segment over the next aligned boundary, or to\n> ~8MB in size.\n\nI don't understand why this is needed - as long as the pages are aligned to\n2MB, why do we need to fill things up on disk? The in-memory contents are the\nrelevant bit, no?\n\n\n> Since the front is all-cold, and there is very little at the end,\n> practically all hot pages are now remapped. The biggest problem with the\n> hackish filler function (in addition to maintainability) is, if explicit\n> huge pages are turned off in the kernel, attempting mmap() with MAP_HUGETLB\n> causes complete startup failure if the .text segment is larger than 8MB.\n\nI would expect MAP_HUGETLB to always fail if not enabled in the kernel,\nindependent of the .text segment size?\n\n\n\n> +/* Callback for dl_iterate_phdr to set the start and end of the .text segment */\n> +static int\n> +FindMapping(struct dl_phdr_info *hdr, size_t size, void *data)\n> +{\n> +\tElfW(Shdr) text_section;\n> +\tFindParams *find_params = (FindParams *) data;\n> +\n> +\t/*\n> +\t * We are only interested in the mapping matching the main executable.\n> +\t * This has the empty string for a name.\n> +\t */\n> +\tif (hdr->dlpi_name[0] != '\\0')\n> +\t\treturn 0;\n> +\n\nIt's not entirely clear we'd only ever want to do this for the main\nexecutable. E.g. 
plpgsql could also benefit.\n\n\n> diff --git a/meson.build b/meson.build\n> index bfacbdc0af..450946370c 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -239,6 +239,9 @@ elif host_system == 'freebsd'\n> elif host_system == 'linux'\n> sema_kind = 'unnamed_posix'\n> cppflags += '-D_GNU_SOURCE'\n> + # WIP: debug builds are huge\n> + # TODO: add portability check\n> + ldflags += ['-Wl,-zcommon-page-size=2097152', '-Wl,-zmax-page-size=2097152']\n\nWhat's that WIP about?\n\n\n> elif host_system == 'netbsd'\n> # We must resolve all dynamic linking in the core server at program start.\n> diff --git a/src/backend/port/filler.c b/src/backend/port/filler.c\n> new file mode 100644\n> index 0000000000..de4e33bb05\n> --- /dev/null\n> +++ b/src/backend/port/filler.c\n> @@ -0,0 +1,29 @@\n> +/*\n> + * Add enough padding to .text segment to bring the end just\n> + * past a 2MB alignment boundary. In practice, this means .text needs\n> + * to be at least 8MB. It shouldn't be much larger than this,\n> + * because then more hot pages will remain in 4kB pages.\n> + *\n> + * FIXME: With this filler added, if explicit huge pages are turned off\n> + * in the kernel, attempting mmap() with MAP_HUGETLB causes a crash\n> + * instead of reporting failure if the .text segment is larger than 8MB.\n> + *\n> + * See MapStaticCodeToLargePages() in large_page.c\n> + *\n> + * XXX: The exact amount of filler must be determined experimentally\n> + * on platforms of interest, in non-assert builds.\n> + *\n> + */\n> +static void\n> +__attribute__((used))\n> +__attribute__((cold))\n> +fill_function(int x)\n> +{\n> +\t/* TODO: More architectures */\n> +#ifdef __x86_64__\n> +__asm__(\n> +\t\".fill 3251000\"\n> +);\n> +#endif\n> +\t(void) x;\n> +}\n> \\ No newline at end of file\n> diff --git a/src/backend/port/meson.build b/src/backend/port/meson.build\n> index 5ab65115e9..d876712e0c 100644\n> --- a/src/backend/port/meson.build\n> +++ b/src/backend/port/meson.build\n> @@ -16,6 +16,9 @@ if 
cdata.has('USE_WIN32_SEMAPHORES')\n> endif\n> \n> if cdata.has('USE_SYSV_SHARED_MEMORY')\n> + if host_system == 'linux'\n> + backend_sources += files('filler.c')\n> + endif\n> backend_sources += files('large_page.c')\n> backend_sources += files('sysv_shmem.c')\n> endif\n> -- \n> 2.37.3\n> \n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Nov 2022 10:21:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nThis nerd-sniped me badly :)\n\nOn 2022-11-03 10:21:23 -0700, Andres Freund wrote:\n> On 2022-11-02 13:32:37 +0700, John Naylor wrote:\n> > I found an MIT-licensed library \"iodlr\" from Intel [3] that allows one to\n> > remap the .text segment to huge pages at program start. Attached is a\n> > hackish, Meson-only, \"works on my machine\" patchset to experiment with this\n> > idea.\n>\n> I wonder how far we can get with just using the linker hints to align\n> sections. I know that the linux folks are working on promoting sufficiently\n> aligned executable pages to huge pages too, and might have succeeded already.\n>\n> IOW, adding the linker flags might be a good first step.\n\nIndeed, I did see that that works to some degree on the 5.19 kernel I was\nrunning. However, it never seems to get around to using huge pages\nsufficiently to compete with explicit use of huge pages.\n\nMore interestingly, a few days ago, a new madvise hint, MADV_COLLAPSE, was\nadded into linux 6.1. That explicitly remaps a region and uses huge pages for\nit. Of course that's going to take a while to be widely available, but it\nseems like a safer approach than the remapping approach from this thread.\n\nI hacked in a MADV_COLLAPSE (with setarch -R, so that I could just hardcode\nthe address / length), and it seems to work nicely.\n\nWith the weird caveat that on some filesystems one needs to make sure that the executable\ndoesn't use reflinks to reuse parts of other files, and that the mold linker and\ncp do... Not a concern on ext4, but on xfs. 
I took to copying the postgres\nbinary with cp --reflink=never\n\n\nFWIW, you can see the state of the page mapping in more detail with the\nkernel's page-types tool\n\nsudo /home/andres/src/kernel/tools/vm/page-types -L -p 12297 -a 0x555555800,0x555556122\nsudo /home/andres/src/kernel/tools/vm/page-types -f /srv/dev/build/m-opt/src/backend/postgres2\n\n\nPerf results:\n\nc=150;psql -f ~/tmp/prewarm.sql;perf stat -a -e cycles,iTLB-loads,iTLB-load-misses,itlb_misses.walk_active,itlb_misses.walk_completed_4k,itlb_misses.walk_completed_2m_4m,itlb_misses.walk_completed_1g pgbench -n -M prepared -S -P1 -c$c -j$c -T10\n\nwithout MADV_COLLAPSE:\n\ntps = 1038230.070771 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,184,344,476,152 cycles (71.41%)\n 2,846,146,710 iTLB-loads (71.43%)\n 2,021,885,782 iTLB-load-misses # 71.04% of all iTLB cache accesses (71.44%)\n 75,633,850,933 itlb_misses.walk_active (71.44%)\n 2,020,962,930 itlb_misses.walk_completed_4k (71.44%)\n 1,213,368 itlb_misses.walk_completed_2m_4m (57.12%)\n 2,293 itlb_misses.walk_completed_1g (57.11%)\n\n 10.064352587 seconds time elapsed\n\n\n\nwith MADV_COLLAPSE:\n\ntps = 1113717.114278 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,173,049,140,611 cycles (71.42%)\n 1,059,224,678 iTLB-loads (71.44%)\n 653,603,712 iTLB-load-misses # 61.71% of all iTLB cache accesses (71.44%)\n 26,135,902,949 itlb_misses.walk_active (71.44%)\n 628,314,285 itlb_misses.walk_completed_4k (71.44%)\n 25,462,916 itlb_misses.walk_completed_2m_4m (57.13%)\n 2,228 itlb_misses.walk_completed_1g (57.13%)\n\nNote that while the rate of itlb-misses stays roughly the same, the total\nnumber of iTLB loads reduced substantially, and the number of cycles in which\nan itlb miss was in progress is 1/3 of what it was before.\n\n\nA lot of the remaining misses are from the context switches. 
The iTLB is\nflushed on context switches, and of course pgbench -S is extremely context\nswitch heavy.\n\nComparing plain -S with 10 pipelined -S transactions (using -t 100000 / -t\n10000 to compare the same amount of work) I get:\n\n\nwithout MADV_COLLAPSE:\n\nnot pipelined:\n\ntps = 1037732.722805 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,691,411,678,007 cycles (62.48%)\n 8,856,107 itlb.itlb_flush (62.48%)\n 4,600,041,062 iTLB-loads (62.48%)\n 2,598,218,236 iTLB-load-misses # 56.48% of all iTLB cache accesses (62.50%)\n 100,095,862,126 itlb_misses.walk_active (62.53%)\n 2,595,376,025 itlb_misses.walk_completed_4k (50.02%)\n 2,558,713 itlb_misses.walk_completed_2m_4m (50.00%)\n 2,146 itlb_misses.walk_completed_1g (49.98%)\n\n 14.582927646 seconds time elapsed\n\n\npipelined:\n\ntps = 161947.008995 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,095,948,341,745 cycles (62.46%)\n 877,556 itlb.itlb_flush (62.46%)\n 4,576,237,561 iTLB-loads (62.48%)\n 307,971,166 iTLB-load-misses # 6.73% of all iTLB cache accesses (62.52%)\n 15,565,279,213 itlb_misses.walk_active (62.55%)\n 306,240,104 itlb_misses.walk_completed_4k (50.03%)\n 1,753,560 itlb_misses.walk_completed_2m_4m (50.00%)\n 2,189 itlb_misses.walk_completed_1g (49.96%)\n\n 9.374687885 seconds time elapsed\n\n\n\nwith MADV_COLLAPSE:\n\nnot pipelined:\ntps = 1112040.859643 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,569,546,236,696 cycles (62.50%)\n 7,094,291 itlb.itlb_flush (62.51%)\n 1,599,845,097 iTLB-loads (62.51%)\n 692,042,864 iTLB-load-misses # 43.26% of all iTLB cache accesses (62.51%)\n 31,529,641,124 itlb_misses.walk_active (62.51%)\n 669,849,177 itlb_misses.walk_completed_4k (49.99%)\n 22,708,146 itlb_misses.walk_completed_2m_4m (49.99%)\n 2,752 itlb_misses.walk_completed_1g (49.99%)\n\n 13.611206182 seconds time elapsed\n\n\npipelined:\n\ntps = 162484.443469 (without 
initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,092,897,514,658 cycles (62.48%)\n 942,351 itlb.itlb_flush (62.48%)\n 233,996,092 iTLB-loads (62.48%)\n 102,155,575 iTLB-load-misses # 43.66% of all iTLB cache accesses (62.49%)\n 6,419,597,286 itlb_misses.walk_active (62.52%)\n 98,758,409 itlb_misses.walk_completed_4k (50.03%)\n 3,342,332 itlb_misses.walk_completed_2m_4m (50.02%)\n 2,190 itlb_misses.walk_completed_1g (49.98%)\n\n 9.355239897 seconds time elapsed\n\nThe difference in itlb.itlb_flush between pipelined / non-pipelined cases\nunsurprisingly is stark.\n\nWhile the pipelined case still sees a good bit reduced itlb traffic, the total\namount of cycles in which a walk is active is just not large enough to matter,\nby the looks of it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 11:33:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-03 10:21:23 -0700, Andres Freund wrote:\n> > - Add a \"cold\" __asm__ filler function that just takes up space, enough to\n> > push the end of the .text segment over the next aligned boundary, or to\n> > ~8MB in size.\n>\n> I don't understand why this is needed - as long as the pages are aligned to\n> 2MB, why do we need to fill things up on disk? The in-memory contents are the\n> relevant bit, no?\n\nI now assume it's because you either observed the mappings set up by the\nloader to not include the space between the segments?\n\nWith sufficient linker flags the segments are sufficiently aligned both on\ndisk and in memory to just map more:\n\nbfd: -Wl,-zmax-page-size=0x200000,-zcommon-page-size=0x200000\n Type Offset VirtAddr PhysAddr\n FileSiz MemSiz Flags Align\n...\n LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000\n 0x00000000000c7f58 0x00000000000c7f58 R 0x200000\n LOAD 0x0000000000200000 0x0000000000200000 0x0000000000200000\n 0x0000000000921d39 0x0000000000921d39 R E 0x200000\n LOAD 0x0000000000c00000 0x0000000000c00000 0x0000000000c00000\n 0x00000000002626b8 0x00000000002626b8 R 0x200000\n LOAD 0x0000000000fdf510 0x00000000011df510 0x00000000011df510\n 0x0000000000037fd6 0x000000000006a310 RW 0x200000\n\ngold -Wl,-zmax-page-size=0x200000,-zcommon-page-size=0x200000,--rosegment\n Type Offset VirtAddr PhysAddr\n FileSiz MemSiz Flags Align\n...\n LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000\n 0x00000000009230f9 0x00000000009230f9 R E 0x200000\n LOAD 0x0000000000a00000 0x0000000000a00000 0x0000000000a00000\n 0x000000000033a738 0x000000000033a738 R 0x200000\n LOAD 0x0000000000ddf4e0 0x0000000000fdf4e0 0x0000000000fdf4e0\n 0x000000000003800a 0x000000000006a340 RW 0x200000\n\nlld: -Wl,-zmax-page-size=0x200000,-zseparate-loadable-segments\n LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000\n 0x000000000033710c 0x000000000033710c R 0x200000\n LOAD 0x0000000000400000 0x0000000000400000 
0x0000000000400000\n 0x0000000000921cb0 0x0000000000921cb0 R E 0x200000\n LOAD 0x0000000000e00000 0x0000000000e00000 0x0000000000e00000\n 0x0000000000020ae0 0x0000000000020ae0 RW 0x200000\n LOAD 0x0000000001000000 0x0000000001000000 0x0000000001000000\n 0x00000000000174ea 0x0000000000049820 RW 0x200000\n\nmold -Wl,-zmax-page-size=0x200000,-zcommon-page-size=0x200000,-zseparate-loadable-segments\n Type Offset VirtAddr PhysAddr\n FileSiz MemSiz Flags Align\n...\n LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000\n 0x000000000032dde9 0x000000000032dde9 R 0x200000\n LOAD 0x0000000000400000 0x0000000000400000 0x0000000000400000\n 0x0000000000921cbe 0x0000000000921cbe R E 0x200000\n LOAD 0x0000000000e00000 0x0000000000e00000 0x0000000000e00000\n 0x00000000002174e8 0x0000000000249820 RW 0x200000\n\nWith these flags the \"R E\" segments all start on a 0x200000/2MiB boundary and\nare padded to the next 2MiB boundary. However the OS / dynamic loader only\nmaps the necessary part, not all the zero padding.\n\nThis means that if we were to issue a MADV_COLLAPSE, we can before it do an\nmremap() to increase the length of the mapping.\n\n\nMADV_COLLAPSE without mremap:\n\ntps = 1117335.766756 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,169,012,466,070 cycles (55.53%)\n 729,146,640,019 instructions # 0.62 insn per cycle (66.65%)\n 7,062,923 itlb.itlb_flush (66.65%)\n 1,041,825,587 iTLB-loads (66.65%)\n 634,272,420 iTLB-load-misses # 60.88% of all iTLB cache accesses (66.66%)\n 27,018,254,873 itlb_misses.walk_active (66.68%)\n 610,639,252 itlb_misses.walk_completed_4k (44.47%)\n 24,262,549 itlb_misses.walk_completed_2m_4m (44.46%)\n 2,948 itlb_misses.walk_completed_1g (44.43%)\n\n 10.039217004 seconds time elapsed\n\n\nMADV_COLLAPSE with mremap:\n\ntps = 1140869.853616 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,173,272,878,934 cycles (55.53%)\n 746,008,850,147 instructions # 0.64 
insn per cycle (66.65%)\n 7,538,962 itlb.itlb_flush (66.65%)\n 799,861,088 iTLB-loads (66.65%)\n 254,347,048 iTLB-load-misses # 31.80% of all iTLB cache accesses (66.66%)\n 14,427,296,885 itlb_misses.walk_active (66.69%)\n 221,811,835 itlb_misses.walk_completed_4k (44.47%)\n 32,881,405 itlb_misses.walk_completed_2m_4m (44.46%)\n 3,043 itlb_misses.walk_completed_1g (44.43%)\n\n 10.038517778 seconds time elapsed\n\n\ncompared to a run without any huge pages (via THP or MADV_COLLAPSE):\n\ntps = 1034960.102843 (without initial connection time)\n\n Performance counter stats for 'system wide':\n\n 1,183,743,785,066 cycles (55.54%)\n 678,525,810,443 instructions # 0.57 insn per cycle (66.65%)\n 7,163,304 itlb.itlb_flush (66.65%)\n 2,952,660,798 iTLB-loads (66.65%)\n 2,105,431,590 iTLB-load-misses # 71.31% of all iTLB cache accesses (66.66%)\n 80,593,535,910 itlb_misses.walk_active (66.68%)\n 2,105,377,810 itlb_misses.walk_completed_4k (44.46%)\n 1,254,156 itlb_misses.walk_completed_2m_4m (44.46%)\n 3,366 itlb_misses.walk_completed_1g (44.44%)\n\n 10.039821650 seconds time elapsed\n\n\nSo a 7.96% win from no-huge-pages to MADV_COLLAPSE and a further 2.11% win\nfrom there to also using mremap(), yielding a total of 10.23%. It's similar\nacross runs.\n\n\nOn my system the other libraries unfortunately aren't aligned properly. It'd\nbe nice to also remap at least libc. The majority of the remaining misses are\nfrom the vdso (too small for a huge page), libc (not aligned properly),\nreturning from system calls (which flush the itlb) and pgbench / libpq (I\ndidn't add the mremap there, there's not enough code for a huge page without\nit).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 14:21:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "On Sat, Nov 5, 2022 at 1:33 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > I wonder how far we can get with just using the linker hints to align\n> > sections. I know that the linux folks are working on promoting\nsufficiently\n> > aligned executable pages to huge pages too, and might have succeeded\nalready.\n> >\n> > IOW, adding the linker flags might be a good first step.\n>\n> Indeed, I did see that that works to some degree on the 5.19 kernel I was\n> running. However, it never seems to get around to using huge pages\n> sufficiently to compete with explicit use of huge pages.\n\nOh nice, I didn't know that! There might be some threshold of pages mapped\nbefore it does so. At least, that issue is mentioned in that paper linked\nupthread for FreeBSD.\n\n> More interestingly, a few days ago, a new madvise hint, MADV_COLLAPSE, was\n> added into linux 6.1. That explicitly remaps a region and uses huge pages\nfor\n> it. Of course that's going to take a while to be widely available, but it\n> seems like a safer approach than the remapping approach from this thread.\n\nI didn't know that either, funny timing.\n\n> I hacked in a MADV_COLLAPSE (with setarch -R, so that I could just\nhardcode\n> the address / length), and it seems to work nicely.\n>\n> With the weird caveat that on fs one needs to make sure that the\nexecutable\n> doesn't reflinks to reuse parts of other files, and that the mold linker\nand\n> cp do... Not a concern on ext4, but on xfs. I took to copying the postgres\n> binary with cp --reflink=never\n\nWhat happens otherwise? That sounds like a difficult thing to guard against.\n\n> The difference in itlb.itlb_flush between pipelined / non-pipelined cases\n> unsurprisingly is stark.\n>\n> While the pipelined case still sees a good bit reduced itlb traffic, the\ntotal\n> amount of cycles in which a walk is active is just not large enough to\nmatter,\n> by the looks of it.\n\nGood to know, thanks for testing. 
Maybe the pipelined case is something\ndevs should consider when microbenchmarking, to reduce noise from context\nswitches.\n\nOn Sat, Nov 5, 2022 at 4:21 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-03 10:21:23 -0700, Andres Freund wrote:\n> > > - Add a \"cold\" __asm__ filler function that just takes up space,\nenough to\n> > > push the end of the .text segment over the next aligned boundary, or\nto\n> > > ~8MB in size.\n> >\n> > I don't understand why this is needed - as long as the pages are\naligned to\n> > 2MB, why do we need to fill things up on disk? The in-memory contents\nare the\n> > relevant bit, no?\n>\n> I now assume it's because you either observed the mappings set up by the\n> loader to not include the space between the segments?\n\nMy knowledge is not quite that deep. The iodlr repo has an example \"hello\nworld\" program, which links with 8 filler objects, each with 32768\n__attribute((used)) dummy functions. I just cargo-culted that idea and\nsimplified it. Interestingly enough, looking through the commit history,\nthey used to align the segments via linker flags, but took it out here:\n\nhttps://github.com/intel/iodlr/pull/25#discussion_r397787559\n\n...saying \"I'm not sure why we added this\". :/\n\nI quickly tried to align the segments with the linker and then in my patch\nhave the address for mmap() rounded *down* from the .text start to the\nbeginning of that segment. It refused to start without logging an error.\n\nBTW, that what I meant before, although I wasn't clear:\n\n> > Since the front is all-cold, and there is very little at the end,\n> > practically all hot pages are now remapped. 
The biggest problem with the\n> > hackish filler function (in addition to maintainability) is, if explicit\n> > huge pages are turned off in the kernel, attempting mmap() with\nMAP_HUGETLB\n> > causes complete startup failure if the .text segment is larger than 8MB.\n>\n> I would expect MAP_HUGETLB to always fail if not enabled in the kernel,\n> independent of the .text segment size?\n\nWith the file-level hack, it would just fail without a trace with .text >\n8MB (I have yet to enable core dumps on this new OS I have...), whereas\nwithout it I did see the failures in the log, and successful fallback.\n\n> With these flags the \"R E\" segments all start on a 0x200000/2MiB boundary\nand\n> are padded to the next 2MiB boundary. However the OS / dynamic loader only\n> maps the necessary part, not all the zero padding.\n>\n> This means that if we were to issue a MADV_COLLAPSE, we can before it do\nan\n> mremap() to increase the length of the mapping.\n\nI see, interesting. What location are you passing for madvise() and\nmremap()? The beginning of the segment (for me has .init/.plt) or an\naligned boundary within .text?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
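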
"msg_date": "Sat, 5 Nov 2022 12:54:18 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-05 12:54:18 +0700, John Naylor wrote:\n> On Sat, Nov 5, 2022 at 1:33 AM Andres Freund <andres@anarazel.de> wrote:\n> > I hacked in a MADV_COLLAPSE (with setarch -R, so that I could just\n> hardcode\n> > the address / length), and it seems to work nicely.\n> >\n> > With the weird caveat that on fs one needs to make sure that the\n> executable\n> > doesn't reflinks to reuse parts of other files, and that the mold linker\n> and\n> > cp do... Not a concern on ext4, but on xfs. I took to copying the postgres\n> > binary with cp --reflink=never\n>\n> What happens otherwise? That sounds like a difficult thing to guard against.\n\nMADV_COLLAPSE fails, but otherwise things continue on. I think it's mostly an\nissue on dev systems, not on prod systems, because there the files will be be\nunpacked from a package or such.\n\n\n> > On 2022-11-03 10:21:23 -0700, Andres Freund wrote:\n> > > > - Add a \"cold\" __asm__ filler function that just takes up space,\n> enough to\n> > > > push the end of the .text segment over the next aligned boundary, or\n> to\n> > > > ~8MB in size.\n> > >\n> > > I don't understand why this is needed - as long as the pages are\n> aligned to\n> > > 2MB, why do we need to fill things up on disk? The in-memory contents\n> are the\n> > > relevant bit, no?\n> >\n> > I now assume it's because you either observed the mappings set up by the\n> > loader to not include the space between the segments?\n>\n> My knowledge is not quite that deep. The iodlr repo has an example \"hello\n> world\" program, which links with 8 filler objects, each with 32768\n> __attribute((used)) dummy functions. I just cargo-culted that idea and\n> simplified it. Interestingly enough, looking through the commit history,\n> they used to align the segments via linker flags, but took it out here:\n>\n> https://github.com/intel/iodlr/pull/25#discussion_r397787559\n>\n> ...saying \"I'm not sure why we added this\". 
:/\n\nThat was about using a linker script, not really linker flags though.\n\nI don't think the dummy functions are a good approach, there were plenty\nthings after it when I played with them.\n\n\n\n> I quickly tried to align the segments with the linker and then in my patch\n> have the address for mmap() rounded *down* from the .text start to the\n> beginning of that segment. It refused to start without logging an error.\n\nHm, what linker was that? I did note that you need some additional flags for\nsome of the linkers.\n\n\n> > With these flags the \"R E\" segments all start on a 0x200000/2MiB boundary\n> and\n> > are padded to the next 2MiB boundary. However the OS / dynamic loader only\n> > maps the necessary part, not all the zero padding.\n> >\n> > This means that if we were to issue a MADV_COLLAPSE, we can before it do\n> an\n> > mremap() to increase the length of the mapping.\n>\n> I see, interesting. What location are you passing for madvise() and\n> mremap()? The beginning of the segment (for me has .init/.plt) or an\n> aligned boundary within .text?\n\nI started postgres with setarch -R, looked at /proc/$pid/[s]maps to see the\nstart/end of the r-xp mapped segment. Here's my hacky code, with a bunch of\ncomments added.\n\n void *addr = (void*) 0x555555800000;\n void *end = (void *) 0x555555e09000;\n size_t advlen = (uintptr_t) end - (uintptr_t) addr;\n\n const size_t bound = 1024*1024*2 - 1;\n size_t advlen_up = (advlen + bound - 1) & ~(bound - 1);\n void *r2;\n\n /*\n * Increase size of mapping to cover the tailing padding to the next\n * segment. 
Otherwise all the code in that range can't be put into\n * a huge page (access in the non-mapped range needs to cause a fault,\n * hence can't be in the huge page).\n * XXX: Should proably assert that that space is actually zeroes.\n */\n r2 = mremap(addr, advlen, advlen_up, 0);\n if (r2 == MAP_FAILED)\n fprintf(stderr, \"mremap failed: %m\\n\");\n else if (r2 != addr)\n fprintf(stderr, \"mremap wrong addr: %m\\n\");\n else\n advlen = advlen_up;\n\n /*\n * The docs for MADV_COLLAPSE say there should be at least one page\n * in the mapped space \"for every eligible hugepage-aligned/sized\n * region to be collapsed\". I just forced that. But probably not\n * necessary.\n */\n r = madvise(addr, advlen, MADV_WILLNEED);\n if (r != 0)\n fprintf(stderr, \"MADV_WILLNEED failed: %m\\n\");\n\n r = madvise(addr, advlen, MADV_POPULATE_READ);\n if (r != 0)\n fprintf(stderr, \"MADV_POPULATE_READ failed: %m\\n\");\n\n /*\n * Make huge pages out of it. Requires at least linux 6.1. We could\n * fall back to MADV_HUGEPAGE if it fails, but it doesn't do all that\n * much in older kernels.\n */\n#define MADV_COLLAPSE 25\n r = madvise(addr, advlen, MADV_COLLAPSE);\n if (r != 0)\n fprintf(stderr, \"MADV_COLLAPSE failed: %m\\n\");\n\n\nA real version would have to open /proc/self/maps and do this for at least\npostgres' r-xp mapping. We could do it for libraries too, if they're suitably\naligned (both in memory and on-disk).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Nov 2022 01:27:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "On Sat, Nov 5, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > simplified it. Interestingly enough, looking through the commit history,\n> > they used to align the segments via linker flags, but took it out here:\n> >\n> > https://github.com/intel/iodlr/pull/25#discussion_r397787559\n> >\n> > ...saying \"I'm not sure why we added this\". :/\n>\n> That was about using a linker script, not really linker flags though.\n\nOops, the commit I was referring to pointed to that discussion, but I\nshould have shown it instead:\n\n--- a/large_page-c/example/Makefile\n+++ b/large_page-c/example/Makefile\n@@ -28,7 +28,6 @@ OBJFILES= \\\n filler16.o \\\n\n OBJS=$(addprefix $(OBJDIR)/,$(OBJFILES))\n-LDFLAGS=-Wl,-z,max-page-size=2097152\n\nBut from what you're saying, this flag wouldn't have been enough anyway...\n\n> I don't think the dummy functions are a good approach, there were plenty\n> things after it when I played with them.\n\nTo be technical, the point wasn't to have no code after it, but to have no\n*hot* code *before* it, since with the iodlr approach the first 1.99MB of\n.text is below the first aligned boundary within that section. But yeah,\nI'm happy to ditch that hack entirely.\n\n> > > With these flags the \"R E\" segments all start on a 0x200000/2MiB\nboundary\n> > and\n> > > are padded to the next 2MiB boundary. However the OS / dynamic loader\nonly\n> > > maps the necessary part, not all the zero padding.\n> > >\n> > > This means that if we were to issue a MADV_COLLAPSE, we can before it\ndo\n> > an\n> > > mremap() to increase the length of the mapping.\n> >\n> > I see, interesting. What location are you passing for madvise() and\n> > mremap()? The beginning of the segment (for me has .init/.plt) or an\n> > aligned boundary within .text?\n\n> /*\n> * Make huge pages out of it. Requires at least linux 6.1. 
We\ncould\n> * fall back to MADV_HUGEPAGE if it fails, but it doesn't do all\nthat\n> * much in older kernels.\n> */\n\nAbout madvise(), I take it MADV_HUGEPAGE and MADV_COLLAPSE only work for\nTHP? The man page seems to indicate that.\n\nIn the support work I've done, the standard recommendation is to turn THP\noff, especially if they report sudden performance problems. If explicit\nHP's are used for shared mem, maybe THP is less of a risk? I need to look\nback at the tests that led to that advice...\n\n> A real version would have to open /proc/self/maps and do this for at least\n\nI can try and generalize your above sketch into a v2 patch.\n\n> postgres' r-xp mapping. We could do it for libraries too, if they're\nsuitably\n> aligned (both in memory and on-disk).\n\nIt looks like plpgsql is only 27 standard pages in size...\n\nRegarding glibc, we could try moving a couple of the hotter functions into\nPG, using smaller and simpler coding, if that has better frontend cache\nbehavior. The paper \"Understanding and Mitigating Front-End Stalls in\nWarehouse-Scale Computers\" talks about this, particularly section 4.4\nregarding memcmp().\n\n> > I quickly tried to align the segments with the linker and then in my\npatch\n> > have the address for mmap() rounded *down* from the .text start to the\n> > beginning of that segment. It refused to start without logging an error.\n>\n> Hm, what linker was that? I did note that you need some additional flags\nfor\n> some of the linkers.\n\nBFD, but I wouldn't worry about that failure too much, since the\nmremap()/madvise() strategy has a lot fewer moving parts.\n\nOn the subject of linkers, though, one thing that tripped me up was trying\nto change the linker with Meson. 
First I tried\n\n-Dc_args='-fuse-ld=lld'\n\nbut that led to warnings like this when :\n/usr/bin/ld: warning: -z separate-loadable-segments ignored\n\nWhen using this in the top level meson.build\n\nelif host_system == 'linux'\n  sema_kind = 'unnamed_posix'\n  cppflags += '-D_GNU_SOURCE'\n  # Align the loadable segments to 2MB boundaries to support remapping to\n  # huge pages.\n  ldflags += cc.get_supported_link_arguments([\n    '-Wl,-zmax-page-size=0x200000',\n    '-Wl,-zcommon-page-size=0x200000',\n    '-Wl,-zseparate-loadable-segments'\n  ])\n\n\nAccording to\n\nhttps://mesonbuild.com/howtox.html#set-linker\n\nI need to add CC_LD=lld to the env vars before invoking, which got rid of\nthe warning. Then I wanted to verify that lld was actually used, and in\n\nhttps://releases.llvm.org/14.0.0/tools/lld/docs/index.html\n\nit says I can run this and it should show “Linker: LLD”, but that doesn't\nappear for me:\n\n$ readelf --string-dump .comment inst-perf/bin/postgres\n\nString dump of section '.comment':\n  [     0]  GCC: (GNU) 12.2.1 20220819 (Red Hat 12.2.1-2)\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sun, 6 Nov 2022 13:56:10 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-06 13:56:10 +0700, John Naylor wrote:\n> On Sat, Nov 5, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't think the dummy functions are a good approach, there were plenty\n> > things after it when I played with them.\n>\n> To be technical, the point wasn't to have no code after it, but to have no\n> *hot* code *before* it, since with the iodlr approach the first 1.99MB of\n> .text is below the first aligned boundary within that section. But yeah,\n> I'm happy to ditch that hack entirely.\n\nJust because code is colder than the alternative branch, doesn't necessary\nmean it's entirely cold overall. I saw hits to things after the dummy function\nto have a perf effect.\n\n\n> > > > With these flags the \"R E\" segments all start on a 0x200000/2MiB\n> boundary\n> > > and\n> > > > are padded to the next 2MiB boundary. However the OS / dynamic loader\n> only\n> > > > maps the necessary part, not all the zero padding.\n> > > >\n> > > > This means that if we were to issue a MADV_COLLAPSE, we can before it\n> do\n> > > an\n> > > > mremap() to increase the length of the mapping.\n> > >\n> > > I see, interesting. What location are you passing for madvise() and\n> > > mremap()? The beginning of the segment (for me has .init/.plt) or an\n> > > aligned boundary within .text?\n>\n> > /*\n> > * Make huge pages out of it. Requires at least linux 6.1. We\n> could\n> > * fall back to MADV_HUGEPAGE if it fails, but it doesn't do all\n> that\n> > * much in older kernels.\n> > */\n>\n> About madvise(), I take it MADV_HUGEPAGE and MADV_COLLAPSE only work for\n> THP? The man page seems to indicate that.\n\nMADV_HUGEPAGE works as long as /sys/kernel/mm/transparent_hugepage/enabled is\nto always or madvise. 
My understanding is that MADV_COLLAPSE will work even\nif /sys/kernel/mm/transparent_hugepage/enabled is set to never.\n\n\n> In the support work I've done, the standard recommendation is to turn THP\n> off, especially if they report sudden performance problems.\n\nI think that's pretty much an outdated suggestion FWIW. Largely caused by Red\nHat extremely aggressively backpatching transparent hugepages into RHEL 6\n(IIRC). Lots of improvements have been made to THP since then. I've tried to\nsee negative effects maybe 2-3 years back, without success.\n\nI really don't see a reason to ever set\n/sys/kernel/mm/transparent_hugepage/enabled to 'never', rather than just 'madvise'.\n\n\n> If explicit HP's are used for shared mem, maybe THP is less of a risk? I\n> need to look back at the tests that led to that advice...\n\nI wouldn't give that advice to customers anymore, unless they use extremely\nold platforms or unless there's very concrete evidence.\n\n\n> > A real version would have to open /proc/self/maps and do this for at least\n>\n> I can try and generalize your above sketch into a v2 patch.\n\nCool.\n\n\n> > postgres' r-xp mapping. We could do it for libraries too, if they're\n> suitably\n> > aligned (both in memory and on-disk).\n>\n> It looks like plpgsql is only 27 standard pages in size...\n>\n> Regarding glibc, we could try moving a couple of the hotter functions into\n> PG, using smaller and simpler coding, if that has better frontend cache\n> behavior. The paper \"Understanding and Mitigating Front-End Stalls in\n> Warehouse-Scale Computers\" talks about this, particularly section 4.4\n> regarding memcmp().\n\nI think the amount of work necessary for that is nontrivial and continual. So\nI'm loathe to go there.\n\n\n> > > I quickly tried to align the segments with the linker and then in my\n> patch\n> > > have the address for mmap() rounded *down* from the .text start to the\n> > > beginning of that segment. 
It refused to start without logging an error.\n> >\n> > Hm, what linker was that? I did note that you need some additional flags\n> for\n> > some of the linkers.\n>\n> BFD, but I wouldn't worry about that failure too much, since the\n> mremap()/madvise() strategy has a lot fewer moving parts.\n>\n> On the subject of linkers, though, one thing that tripped me up was trying\n> to change the linker with Meson. First I tried\n>\n> -Dc_args='-fuse-ld=lld'\n\nIt's -Dc_link_args=...\n\n\n> but that led to warnings like this when :\n> /usr/bin/ld: warning: -z separate-loadable-segments ignored\n>\n> When using this in the top level meson.build\n>\n> elif host_system == 'linux'\n> sema_kind = 'unnamed_posix'\n> cppflags += '-D_GNU_SOURCE'\n> # Align the loadable segments to 2MB boundaries to support remapping to\n> # huge pages.\n> ldflags += cc.get_supported_link_arguments([\n> '-Wl,-zmax-page-size=0x200000',\n> '-Wl,-zcommon-page-size=0x200000',\n> '-Wl,-zseparate-loadable-segments'\n> ])\n>\n>\n> According to\n>\n> https://mesonbuild.com/howtox.html#set-linker\n>\n> I need to add CC_LD=lld to the env vars before invoking, which got rid of\n> the warning. Then I wanted to verify that lld was actually used, and in\n>\n> https://releases.llvm.org/14.0.0/tools/lld/docs/index.html\n\nYou can just look at build.ninja, fwiw. Or use ninja -v (in postgres's cases\nwith -d keeprsp, because the commandline ends up being long enough for a\nresponse file being used).\n\n\n> it says I can run this and it should show “Linker: LLD”, but that doesn't\n> appear for me:\n>\n> $ readelf --string-dump .comment inst-perf/bin/postgres\n>\n> String dump of section '.comment':\n> [ 0] GCC: (GNU) 12.2.1 20220819 (Red Hat 12.2.1-2)\n\nThat's added by the compiler, not the linker. See e.g.:\n\n$ readelf --string-dump .comment src/backend/postgres_lib.a.p/storage_ipc_procarray.c.o\n\nString dump of section '.comment':\n [ 1] GCC: (Debian 12.2.0-9) 12.2.0\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 6 Nov 2022 10:16:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "On Sat, Nov 5, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n\n> /*\n> * Make huge pages out of it. Requires at least linux 6.1. We\ncould\n> * fall back to MADV_HUGEPAGE if it fails, but it doesn't do all\nthat\n> * much in older kernels.\n> */\n> #define MADV_COLLAPSE 25\n> r = madvise(addr, advlen, MADV_COLLAPSE);\n> if (r != 0)\n> fprintf(stderr, \"MADV_COLLAPSE failed: %m\\n\");\n>\n>\n> A real version would have to open /proc/self/maps and do this for at least\n> postgres' r-xp mapping. We could do it for libraries too, if they're\nsuitably\n> aligned (both in memory and on-disk).\n\nHi Andres, my kernel has been new enough for a while now, and since TLBs\nand context switches came up in the thread on... threads, I'm swapping this\nback in my head.\n\nFor the postmaster, it should be simple to have a function that just takes\nthe address of itself, then parses /proc/self/maps to find the boundaries\nwithin which it lies. I haven't thought about libraries much. Though with\njust the postmaster it seems that would give us the biggest bang for the\nbuck?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sat, Nov 5, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:> /*> * Make huge pages out of it. Requires at least linux 6.1. We could> * fall back to MADV_HUGEPAGE if it fails, but it doesn't do all that> * much in older kernels.> */> #define MADV_COLLAPSE 25> r = madvise(addr, advlen, MADV_COLLAPSE);> if (r != 0)> fprintf(stderr, \"MADV_COLLAPSE failed: %m\\n\");>>> A real version would have to open /proc/self/maps and do this for at least> postgres' r-xp mapping. We could do it for libraries too, if they're suitably> aligned (both in memory and on-disk).Hi Andres, my kernel has been new enough for a while now, and since TLBs and context switches came up in the thread on... 
threads, I'm swapping this back in my head.For the postmaster, it should be simple to have a function that just takes the address of itself, then parses /proc/self/maps to find the boundaries within which it lies. I haven't thought about libraries much. Though with just the postmaster it seems that would give us the biggest bang for the buck?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 12:40:18 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
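A note on the approach sketched in the message above: finding "the boundaries within which it lies" amounts to scanning /proc/self/maps for the entry whose address range contains a known code address. A minimal illustration of that lookup (Python purely for brevity — the real implementation would be C inside the postmaster; the sample line reuses the addresses from Andres's debug output later in this thread, and the field layout follows proc(5)):

```python
def find_mapping(maps_text, addr):
    """Return (start, end, perms, path) of the maps entry containing addr."""
    for line in maps_text.splitlines():
        fields = line.split()
        lo, hi = (int(x, 16) for x in fields[0].split("-"))
        if lo <= addr < hi:
            perms = fields[1]
            # The pathname column is absent for anonymous mappings.
            path = fields[5] if len(fields) > 5 else ""
            return lo, hi, perms, path
    return None

sample = (
    "563b2a800000-563b2afe3000 r-xp 00000000 fd:01 123 /usr/bin/postgres\n"
    "7f0000000000-7f0000001000 rw-p 00000000 00:00 0\n"
)
# Look up the mapping that contains an address inside the text segment.
m = find_mapping(sample, 0x563B2ABF0A72)
```

The same scan, restricted to entries with `r-xp` permissions and the executable's path, would identify the candidate range to pass to madvise().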
{
"msg_contents": "Hi,\n\nOn 2023-06-14 12:40:18 +0700, John Naylor wrote:\n> On Sat, Nov 5, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > /*\n> > * Make huge pages out of it. Requires at least linux 6.1. We\n> could\n> > * fall back to MADV_HUGEPAGE if it fails, but it doesn't do all\n> that\n> > * much in older kernels.\n> > */\n> > #define MADV_COLLAPSE 25\n> > r = madvise(addr, advlen, MADV_COLLAPSE);\n> > if (r != 0)\n> > fprintf(stderr, \"MADV_COLLAPSE failed: %m\\n\");\n> >\n> >\n> > A real version would have to open /proc/self/maps and do this for at least\n> > postgres' r-xp mapping. We could do it for libraries too, if they're\n> suitably\n> > aligned (both in memory and on-disk).\n> \n> Hi Andres, my kernel has been new enough for a while now, and since TLBs\n> and context switches came up in the thread on... threads, I'm swapping this\n> back in my head.\n\nCool - I think we have some real potential for substantial wins around this.\n\n\n> For the postmaster, it should be simple to have a function that just takes\n> the address of itself, then parses /proc/self/maps to find the boundaries\n> within which it lies. I haven't thought about libraries much. Though with\n> just the postmaster it seems that would give us the biggest bang for the\n> buck?\n\nI think that is the main bit, yes. We could just try to do this for the\nlibraries, but accept failure to do so?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Jun 2023 10:05:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 12:40 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Sat, Nov 5, 2022 at 3:27 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > A real version would have to open /proc/self/maps and do this for at\nleast\n> > postgres' r-xp mapping. We could do it for libraries too, if they're\nsuitably\n> > aligned (both in memory and on-disk).\n\n> For the postmaster, it should be simple to have a function that just\ntakes the address of itself, then parses /proc/self/maps to find the\nboundaries within which it lies. I haven't thought about libraries much.\nThough with just the postmaster it seems that would give us the biggest\nbang for the buck?\n\nHere's a start at that, trying with postmaster only. Unfortunately, I get\n\"MADV_COLLAPSE failed: Invalid argument\". I tried different addresses with\nno luck, and also got the same result with a small standalone program. I'm\non ext4, so I gather I don't need \"cp --reflink=never\" but tried it anyway.\nConfiguration looks normal by \"grep HUGEPAGE /boot/config-$(uname\n-r)\". Maybe there's something obvious I'm missing?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 20 Jun 2023 10:23:14 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-20 10:23:14 +0700, John Naylor wrote:\n> Here's a start at that, trying with postmaster only. Unfortunately, I get\n> \"MADV_COLLAPSE failed: Invalid argument\".\n\nI also see that. But depending on the steps, I also see\n MADV_COLLAPSE failed: Resource temporarily unavailable\n\nI suspect there's some kernel issue. I'll try to ping somebody.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 10:29:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-20 10:29:41 -0700, Andres Freund wrote:\n> On 2023-06-20 10:23:14 +0700, John Naylor wrote:\n> > Here's a start at that, trying with postmaster only. Unfortunately, I get\n> > \"MADV_COLLAPSE failed: Invalid argument\".\n> \n> I also see that. But depending on the steps, I also see\n> MADV_COLLAPSE failed: Resource temporarily unavailable\n> \n> I suspect there's some kernel issue. I'll try to ping somebody.\n\nWhich kernel version are you using? It looks like the issue I am hitting might\nbe specific to the in-development 6.4 kernel.\n\nOne thing I now remember, after trying older kernels, is that it looks like\none sometimes needs to call 'sync' to ensure the page cache data for the\nexecutable is clean, before executing postgres.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 10:46:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-06-20 10:29:41 -0700, Andres Freund wrote:\n> > On 2023-06-20 10:23:14 +0700, John Naylor wrote:\n> > > Here's a start at that, trying with postmaster only. Unfortunately, I\nget\n> > > \"MADV_COLLAPSE failed: Invalid argument\".\n> >\n> > I also see that. But depending on the steps, I also see\n> > MADV_COLLAPSE failed: Resource temporarily unavailable\n> >\n> > I suspect there's some kernel issue. I'll try to ping somebody.\n>\n> Which kernel version are you using? It looks like the issue I am hitting\nmight\n> be specific to the in-development 6.4 kernel.\n\n(Fedora 38) uname -r shows\n\n6.3.7-200.fc38.x86_64\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 21, 2023 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:>> Hi,>> On 2023-06-20 10:29:41 -0700, Andres Freund wrote:> > On 2023-06-20 10:23:14 +0700, John Naylor wrote:> > > Here's a start at that, trying with postmaster only. Unfortunately, I get> > > \"MADV_COLLAPSE failed: Invalid argument\".> >> > I also see that. But depending on the steps, I also see> > MADV_COLLAPSE failed: Resource temporarily unavailable> >> > I suspect there's some kernel issue. I'll try to ping somebody.>> Which kernel version are you using? It looks like the issue I am hitting might> be specific to the in-development 6.4 kernel.(Fedora 38) uname -r shows 6.3.7-200.fc38.x86_64--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Jun 2023 09:35:36 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 09:35:36 +0700, John Naylor wrote:\n> On Wed, Jun 21, 2023 at 12:46 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-06-20 10:29:41 -0700, Andres Freund wrote:\n> > > On 2023-06-20 10:23:14 +0700, John Naylor wrote:\n> > > > Here's a start at that, trying with postmaster only. Unfortunately, I\n> get\n> > > > \"MADV_COLLAPSE failed: Invalid argument\".\n> > >\n> > > I also see that. But depending on the steps, I also see\n> > > MADV_COLLAPSE failed: Resource temporarily unavailable\n> > >\n> > > I suspect there's some kernel issue. I'll try to ping somebody.\n> >\n> > Which kernel version are you using? It looks like the issue I am hitting\n> might\n> > be specific to the in-development 6.4 kernel.\n> \n> (Fedora 38) uname -r shows\n> \n> 6.3.7-200.fc38.x86_64\n\nFWIW, I bisected the bug I was encountering.\n\nAs far as I understand, it should not affect you, it was only merged into\n6.4-rc1 and a fix is scheduled to be merged into 6.4 before its release. See\nhttps://lore.kernel.org/all/ZJIWAvTczl0rHJBv@x1n/\n\nSo I am wondering if you're encountering a different kind of problem. As I\nmentioned, I have observed that the pages need to be clean for this to\nwork. For me adding a \"sync path/to/postgres\" makes it work on 6.3.8. Without\nthe sync it starts to work a while later (presumably when the kernel got\naround to writing the data back).\n\n\nwithout sync:\n\nself: 0x563b2abf0a72 start: 563b2a800000 end: 563b2afe3000\nold advlen: 7e3000\nnew advlen: 800000\nMADV_COLLAPSE failed: Invalid argument\n\nwith sync:\nself: 0x555c947f0a72 start: 555c94400000 end: 555c94be3000\nold advlen: 7e3000\nnew advlen: 800000\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 20:41:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
},
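For reference, the "old advlen: 7e3000 / new advlen: 800000" lines in the debug output above are huge-page alignment arithmetic: the start is rounded down to a 2MB boundary (here it is already aligned) and the advised length is rounded up to a multiple of 2MB. A small sketch of that arithmetic, using the values from the output (the helper names are ours, not from the patch):

```python
HUGE_PAGE_SIZE = 0x200000  # 2MB, the x86-64 huge page size discussed in this thread

def align_down(x, a):
    # a must be a power of two; clear the low bits.
    return x & ~(a - 1)

def align_up(x, a):
    return (x + a - 1) & ~(a - 1)

# Mapping from the debug output: 555c94400000-555c94be3000 style range,
# here using the "without sync" run's addresses.
start = align_down(0x563B2A800000, HUGE_PAGE_SIZE)
advlen = align_up(0x563B2AFE3000 - start, HUGE_PAGE_SIZE)  # 0x7e3000 -> 0x800000
```

This is why a 0x7e3000-byte text mapping ends up advised over 0x800000 bytes, i.e. four 2MB huge pages.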
{
"msg_contents": "On Wed, Jun 21, 2023 at 10:42 AM Andres Freund <andres@anarazel.de> wrote:\n\n> So I am wondering if you're encountering a different kind of problem. As I\n> mentioned, I have observed that the pages need to be clean for this to\n> work. For me adding a \"sync path/to/postgres\" makes it work on 6.3.8.\nWithout\n> the sync it starts to work a while later (presumably when the kernel got\n> around to writing the data back).\n\nHmm, then after rebooting today, it shouldn't have that problem until a\nbuild links again, but I'll make sure to do that when building. Still same\nfailure, though. Looking more closely at the manpage for madvise, it has\nthis under MADV_HUGEPAGE:\n\n\"The MADV_HUGEPAGE, MADV_NOHUGEPAGE, and MADV_COLLAPSE operations are\navailable only if the kernel was configured with\nCONFIG_TRANSPARENT_HUGEPAGE and file/shmem memory is only supported if the\nkernel was configured with CONFIG_READ_ONLY_THP_FOR_FS.\"\n\nEarlier, I only checked the first config option but didn't know about the\nsecond...\n\n$ grep CONFIG_READ_ONLY_THP_FOR_FS /boot/config-$(uname -r)\n# CONFIG_READ_ONLY_THP_FOR_FS is not set\n\nApparently, it's experimental. That could be the explanation, but now I'm\nwondering why the fallback\n\nmadvise(addr, advlen, MADV_HUGEPAGE);\n\ndidn't also give an error. I wonder if we could mremap to some anonymous\nregion and call madvise on that. That would be more similar to the hack I\nshared last year, which may be more fragile, but now it wouldn't\nneed explicit huge pages.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 21, 2023 at 10:42 AM Andres Freund <andres@anarazel.de> wrote:> So I am wondering if you're encountering a different kind of problem. As I> mentioned, I have observed that the pages need to be clean for this to> work. For me adding a \"sync path/to/postgres\" makes it work on 6.3.8. 
Without> the sync it starts to work a while later (presumably when the kernel got> around to writing the data back).Hmm, then after rebooting today, it shouldn't have that problem until a build links again, but I'll make sure to do that when building. Still same failure, though. Looking more closely at the manpage for madvise, it has this under MADV_HUGEPAGE:\"The MADV_HUGEPAGE, MADV_NOHUGEPAGE, and MADV_COLLAPSE operations are available only if the kernel was configured with CONFIG_TRANSPARENT_HUGEPAGE and file/shmem memory is only supported if the kernel was configured with CONFIG_READ_ONLY_THP_FOR_FS.\"Earlier, I only checked the first config option but didn't know about the second...$ grep CONFIG_READ_ONLY_THP_FOR_FS /boot/config-$(uname -r)# CONFIG_READ_ONLY_THP_FOR_FS is not setApparently, it's experimental. That could be the explanation, but now I'm wondering why the fallbackmadvise(addr, advlen, MADV_HUGEPAGE);didn't also give an error. I wonder if we could mremap to some anonymous region and call madvise on that. That would be more similar to the hack I shared last year, which may be more fragile, but now it wouldn't need explicit huge pages.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 21 Jun 2023 15:06:43 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: remap the .text segment into huge pages at run time"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at the code in ri_PlanCheck\nof src/backend/utils/adt/ri_triggers.c starting at line 2289.\n\nWhen qplan is NULL, we log an error. This would skip\ncalling SetUserIdAndSecContext().\n\nI think the intention of the code should be restoring user id\nand SecContext regardless of the outcome from SPI_prepare().\n\nIf my understanding is correct, please take a look at the patch.\n\nThanks\n\nHi,I was looking at the code in ri_PlanCheck of src/backend/utils/adt/ri_triggers.c starting at line 2289.When qplan is NULL, we log an error. This would skip calling SetUserIdAndSecContext().I think the intention of the code should be restoring user id and SecContext regardless of the outcome from SPI_prepare().If my understanding is correct, please take a look at the patch.Thanks",
"msg_date": "Wed, 2 Nov 2022 07:04:37 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "restoring user id and SecContext before logging error in ri_PlanCheck"
},
{
"msg_contents": "Looking down in ri_PerformCheck(), I see there may be case where error from\nSPI_execute_snapshot() would skip restoring UID.\n\nPlease look at patch v2 which tried to handle such case.\n\nThanks",
"msg_date": "Wed, 2 Nov 2022 08:00:58 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: restoring user id and SecContext before logging error in\n ri_PlanCheck"
},
{
"msg_contents": "On Wed, Nov 02, 2022 at 08:00:58AM -0700, Zhihong Yu wrote:\n> Looking down in ri_PerformCheck(), I see there may be case where error from\n> SPI_execute_snapshot() would skip restoring UID.\n\n> @@ -2405,13 +2405,19 @@ ri_PerformCheck(const RI_ConstraintInfo *riinfo,\n> \t\t\t\t\t\t SECURITY_NOFORCE_RLS);\n> \n> \t/* Finally we can run the query. */\n> -\tspi_result = SPI_execute_snapshot(qplan,\n> -\t\t\t\t\t\t\t\t\t vals, nulls,\n> -\t\t\t\t\t\t\t\t\t test_snapshot, crosscheck_snapshot,\n> -\t\t\t\t\t\t\t\t\t false, false, limit);\n> -\n> -\t/* Restore UID and security context */\n> -\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\tPG_TRY();\n> +\t{\n> +\t\tspi_result = SPI_execute_snapshot(qplan,\n> +\t\t\t\t\t\t\t\t\t\t vals, nulls,\n> +\t\t\t\t\t\t\t\t\t\t test_snapshot, crosscheck_snapshot,\n> +\t\t\t\t\t\t\t\t\t\t false, false, limit);\n> +\t}\n> +\tPG_FINALLY();\n> +\t{\n> +\t\t/* Restore UID and security context */\n> +\t\tSetUserIdAndSecContext(save_userid, save_sec_context);\n> +\t}\n> +\tPG_END_TRY();\n\nAfter an error, AbortSubTransaction() or AbortTransaction() will restore\nuserid and sec_context. That makes such changes unnecessary.\n\n\n",
"msg_date": "Sun, 4 Dec 2022 16:56:29 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: restoring user id and SecContext before logging error in\n ri_PlanCheck"
}
] |
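For readers unfamiliar with the macros discussed in the thread above: PG_TRY()/PG_FINALLY() give C code try/finally semantics, so the proposed patch's SetUserIdAndSecContext() restore would run on both the normal and the error path — though, as Noah points out, (sub)transaction abort already restores the user id, making the extra guard unnecessary. The save/override/restore contract itself is just this (illustrative Python, not PostgreSQL code; the names are made up for the sketch):

```python
class Session:
    """Stand-in for the backend's current user id / security context."""
    def __init__(self):
        self.user = "postgres"

def run_as(session, user, action):
    """Run action as another user, restoring the previous user whether the
    action returns normally or raises — the try/finally contract."""
    saved = session.user
    session.user = user
    try:
        return action()
    finally:
        session.user = saved

s = Session()
result = run_as(s, "table_owner", lambda: 42)   # normal path: restored
try:
    run_as(s, "table_owner", lambda: 1 / 0)     # error path: also restored
except ZeroDivisionError:
    pass
```

Without the finally (or an outer abort handler doing the equivalent), the error path would leave the switched user in place — which is the hazard the original report was worried about.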
[
{
"msg_contents": "To Whom It May Concern;\n\nSome additional clarity in the versions 14/15 documentation would be helpful specifically surrounding the \"target_role\" clause for the ALTER DEFAULT PRIVILEGES command. To the uninitiated, the current description seems vague. Maybe something like the following would help:\n\ntarget_role\n The name of an existing role of which the current role is a member. Default privileges are only applied to objects created by the targeted role/user (FOR ROLE target_role). If the FOR ROLE clause is omitted, the targeted user defaults to the current user executing the ALTER DEFAULT PRIVILEGES command. The result can be seen using the following query:\n\nselect table_catalog as database\n ,table_schema\n ,table_name\n ,privilege_type\n ,grantee\n ,'revoke '||privilege_type||' on '||table_schema||'.'||table_name||' from '||grantee||';' as revoke_stmt\nfrom information_schema.table_privileges\nwhere table_schema = 'my_schema'\nand table_name = 'my_table'\norder by 1,2,3,5,4;\n\n\nAlso, additional explanation about the differences between global defaults versus schema-level defaults, and how to identify them, would be helpful.\n\nAdditional explanation about exactly what is happening would help to put this command into perspective. On successful execution with the correct parameter values, and using both the FOR ROLE and IN SCHEMA clauses, I also received privilege grants directed to the user executing the ALTER DEFAULT PRIVILEGES command. This was in addition to the expected privileges specified in the command. I'm not sure why this occurred or how to eliminate it, in the interest of establishing \"least privilege\" permissions.\n\nThank you.\n\n\nDavid E. Burns, Jr. 
| Domain Architect | FedEx Services IT | Dock and Edge Services | Mobile 412.304.8303\n1000 FedEx Drive, Moon Township, PA 15108 | david.burns@fedex.com<mailto:david.burns@fedex.com>\n\n\n\n\n\n\n\n\n\n\nTo Whom It May Concern;\n \nSome additional clarity in the versions 14/15 documentation would be helpful specifically surrounding the \"target_role\" clause for the ALTER DEFAULT PRIVILEGES command. To the uninitiated, the current description seems vague. Maybe something\n like the following would help:\n \ntarget_role\n The name of an existing role of which the current role is a member. Default privileges are only applied to objects created by the targeted role/user (FOR ROLE target_role). If the FOR ROLE clause is omitted, the targeted\n user defaults to the current user executing the ALTER DEFAULT PRIVILEGES command. The result can be seen using the following query:\n \nselect table_catalog as database\n ,table_schema\n ,table_name\n ,privilege_type\n ,grantee\n ,'revoke '||privilege_type||' on '||table_schema||'.'||table_name||' from '||grantee||';' as revoke_stmt\nfrom information_schema.table_privileges\nwhere table_schema = 'my_schema'\nand table_name = 'my_table'\norder by 1,2,3,5,4;\n \n \nAlso, additional explanation about the differences between global defaults versus schema-level defaults, and how to identify them, would be helpful.\n \nAdditional explanation about exactly what is happening would help to put this command into perspective. On successful execution with the correct parameter values, and using both the FOR ROLE and IN SCHEMA clauses, I also received privilege\n grants directed to the user executing the ALTER DEFAULT PRIVILEGES command. This was in addition to the expected privileges specified in the command. I'm not sure why this occurred or how to eliminate it, in the interest of establishing \"least privilege\"\n permissions.\n \nThank you.\n \n \nDavid E. 
Burns, Jr.\n| Domain Architect\n| FedEx Services IT | Dock and Edge Services\n| Mobile 412.304.8303\n\n1000 FedEx Drive,\nMoon Township, PA 15108\n| \ndavid.burns@fedex.com",
"msg_date": "Wed, 2 Nov 2022 19:29:49 +0000",
"msg_from": "David Burns <david.burns@fedex.com>",
"msg_from_op": true,
"msg_subject": "Version 14/15 documentation Section \"Alter Default Privileges\""
},
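The revoke_stmt column in the query above is plain string concatenation over information_schema.table_privileges rows; the same construction done client-side looks like this (illustrative sketch with made-up row data — the column names match information_schema, the helper and sample values are ours):

```python
def revoke_stmt(row):
    """Build the same statement the query's revoke_stmt column concatenates."""
    return ("revoke {privilege_type} on {table_schema}.{table_name} "
            "from {grantee};").format(**row)

# Hypothetical row shaped like an information_schema.table_privileges result.
row = {"table_schema": "my_schema", "table_name": "my_table",
       "privilege_type": "SELECT", "grantee": "app_user"}
stmt = revoke_stmt(row)
```

Note that, as discussed later in the thread, default privileges themselves are better inspected with psql's \ddp than by reverse-engineering them from granted table privileges.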
{
"msg_contents": "On Wed, 2022-11-02 at 19:29 +0000, David Burns wrote:\n> To Whom It May Concern;\n\nIt concerns me, because I often see questions from people who misunderstand this.\n\n> Some additional clarity in the versions 14/15 documentation would be helpful specifically\n> surrounding the \"target_role\" clause for the ALTER DEFAULT PRIVILEGES command.\n> To the uninitiated, the current description seems vague. Maybe something like the following would help:\n> \n> target_role\n> The name of an existing role of which the current role is a member.\n> Default privileges are only applied to objects created by the targeted role/user (FOR ROLE target_role).\n> If the FOR ROLE clause is omitted, the targeted user defaults to the current user executing the\n> ALTER DEFAULT PRIVILEGES command.\n\n+1\n\nI like the wording, except that I would replace \"targeted role/user (FOR ROLE target_role)\" with\n\"target role\" for added clarity.\n\n> The result can be seen using the following query:\n> \n> select table_catalog as database\n> ,table_schema\n> ,table_name\n> ,privilege_type\n> ,grantee\n> ,'revoke '||privilege_type||' on '||table_schema||'.'||table_name||' from '||grantee||';' as revoke_stmt\n> from information_schema.table_privileges\n> where table_schema = 'my_schema'\n> and table_name = 'my_table'\n> order by 1,2,3,5,4;\n\nI am not so happy with that query; I thinks that is going too far.\nPerhaps we can say that the \"psql\" command \"\\ddp\" can be used to view default privileges.\n\n> Also, additional explanation about the differences between global defaults versus\n> schema-level defaults, and how to identify them, would be helpful.\n\nThe examples already cover that in some detail.\n\n> Additional explanation about exactly what is happening would help to put this command into perspective.\n> On successful execution with the correct parameter values, and using both the FOR ROLE and\n> IN SCHEMA clauses, I also received privilege grants directed to the user 
executing the\n> ALTER DEFAULT PRIVILEGES command. This was in addition to the expected privileges specified in the command.\n> I'm not sure why this occurred or how to eliminate it, in the interest of establishing \"least privilege\" permissions.\n\nALTER DEFAULT PRIVILEGES does nothing like that...\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 03 Nov 2022 11:32:48 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Thu, 2022-11-03 at 11:32 +0100, Laurenz Albe wrote:\n> On Wed, 2022-11-02 at 19:29 +0000, David Burns wrote:\n> > To Whom It May Concern;\n> \n> It concerns me, because I often see questions from people who misunderstand this.\n> \n> > Some additional clarity in the versions 14/15 documentation would be helpful specifically\n> > surrounding the \"target_role\" clause for the ALTER DEFAULT PRIVILEGES command.\n> > To the uninitiated, the current description seems vague. Maybe something like the following would help:\n> > \n> > target_role\n> > The name of an existing role of which the current role is a member.\n> > Default privileges are only applied to objects created by the targeted role/user (FOR ROLE target_role).\n> > If the FOR ROLE clause is omitted, the targeted user defaults to the current user executing the\n> > ALTER DEFAULT PRIVILEGES command.\n> \n> +1\n\nAfter some more thinking, I came up with the attached patch.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 04 Nov 2022 10:49:42 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Fri, 2022-11-04 at 10:49 +0100, Laurenz Albe wrote:\n> On Thu, 2022-11-03 at 11:32 +0100, Laurenz Albe wrote:\n> > On Wed, 2022-11-02 at 19:29 +0000, David Burns wrote:\n> > \n> > > Some additional clarity in the versions 14/15 documentation would be helpful specifically\n> > > surrounding the \"target_role\" clause for the ALTER DEFAULT PRIVILEGES command.\n> > > To the uninitiated, the current description seems vague. Maybe something like the following would help:\n> \n> After some more thinking, I came up with the attached patch.\n\nI'm sending a reply to the hackers list, so that I can add the patch to the commitfest.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 27 Oct 2023 09:03:04 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 27, 2023 at 09:03:04AM +0200, Laurenz Albe wrote:\n> On Fri, 2022-11-04 at 10:49 +0100, Laurenz Albe wrote:\n> > On Thu, 2022-11-03 at 11:32 +0100, Laurenz Albe wrote:\n> > > On Wed, 2022-11-02 at 19:29 +0000, David Burns wrote:\n> > > \n> > > > Some additional clarity in the versions 14/15 documentation\n> > > > would be helpful specifically surrounding the \"target_role\"\n> > > > clause for the ALTER DEFAULT PRIVILEGES command. To the\n> > > > uninitiated, the current description seems vague.� Maybe\n> > > > something like the following would help:\n> > \n> > After some more thinking, I came up with the attached patch.\n> \n> I'm sending a reply to the hackers list, so that I can add the patch\n> to the commitfest.\n\nI think something like this is highly useful because I have also seen\npeople very confused why default privileges are not applied.\n\nHowever, maybe it could be made even clearer if also the main\ndescription is amended, like\n\n\"You can change default privileges only for objects that will be created\nby yourself or by roles that you are a member of (via target_role).\"\n\nor something.\n\n\nMichael\n\n\n",
"msg_date": "Fri, 27 Oct 2023 11:34:20 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Fri, 2023-10-27 at 11:34 +0200, Michael Banck wrote:\n> On Fri, Oct 27, 2023 at 09:03:04AM +0200, Laurenz Albe wrote:\n> > On Fri, 2022-11-04 at 10:49 +0100, Laurenz Albe wrote:\n> > > On Thu, 2022-11-03 at 11:32 +0100, Laurenz Albe wrote:\n> > > > On Wed, 2022-11-02 at 19:29 +0000, David Burns wrote:\n> > > > > Some additional clarity in the versions 14/15 documentation\n> > > > > would be helpful specifically surrounding the \"target_role\"\n> > > > > clause for the ALTER DEFAULT PRIVILEGES command. To the\n> > > > > uninitiated, the current description seems vague. Maybe\n> > > > > something like the following would help:\n> > > \n> > > After some more thinking, I came up with the attached patch.\n> > \n> I think something like this is highly useful because I have also seen\n> people very confused why default privileges are not applied.\n> \n> However, maybe it could be made even clearer if also the main\n> description is amended, like\n> \n> \"You can change default privileges only for objects that will be created\n> by yourself or by roles that you are a member of (via target_role).\"\n> \n> or something.\n\nTrue. I have done that in the attached patch.\nIn this patch, it is mentioned *twice* that ALTER DEFAULT PRIVILEGES\nonly affects objects created by the current user. I thought that\nwould not harm, but if it is too redundant, I can remove the second\nmention.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 27 Oct 2023 17:49:42 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "Hi,\n\nOn Fri, Oct 27, 2023 at 05:49:42PM +0200, Laurenz Albe wrote:\n> On Fri, 2023-10-27 at 11:34 +0200, Michael Banck wrote:\n> > On Fri, Oct 27, 2023 at 09:03:04AM +0200, Laurenz Albe wrote:\n> > > On Fri, 2022-11-04 at 10:49 +0100, Laurenz Albe wrote:\n> > > > On Thu, 2022-11-03 at 11:32 +0100, Laurenz Albe wrote:\n> > > > > On Wed, 2022-11-02 at 19:29 +0000, David Burns wrote:\n> > > > > > Some additional clarity in the versions 14/15 documentation\n> > > > > > would be helpful specifically surrounding the \"target_role\"\n> > > > > > clause for the ALTER DEFAULT PRIVILEGES command. To the\n> > > > > > uninitiated, the current description seems vague.� Maybe\n> > > > > > something like the following would help:\n> > > > \n> > > > After some more thinking, I came up with the attached patch.\n> > > \n> > I think something like this is highly useful because I have also seen\n> > people very confused why default privileges are not applied.\n> > \n> > However, maybe it could be made even clearer if also the main\n> > description is amended, like\n> > \n> > \"You can change default privileges only for objects that will be created\n> > by yourself or by roles that you are a member of (via target_role).\"\n> > \n> > or something.\n> \n> True. I have done that in the attached patch.\n> In this patch, it is mentioned *twice* that ALTER DEFAULT PRIVILEGES\n> only affects objects created by the current user. I thought that\n> would not harm, but if it is too redundant, I can remove the second\n> mention.\n\nI think it is fine, and I have marked the patch as ready-for-committer.\n\nI think it should be applied to all branches, not just 14/15 as\nmentioned in the subject.\n\n\nMichael\n\n\n",
"msg_date": "Sat, 28 Oct 2023 11:01:59 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Sat, Oct 28, 2023 at 11:01:59AM +0200, Michael Banck wrote:\n> On Fri, Oct 27, 2023 at 05:49:42PM +0200, Laurenz Albe wrote:\n> > True. I have done that in the attached patch.\n> > In this patch, it is mentioned *twice* that ALTER DEFAULT PRIVILEGES\n> > only affects objects created by the current user. I thought that\n> > would not harm, but if it is too redundant, I can remove the second\n> > mention.\n> \n> I think it is fine, and I have marked the patch as ready-for-committer.\n> \n> I think it should be applied to all branches, not just 14/15 as\n> mentioned in the subject.\n\nI have developed the attached patch on top of the alter default patch I\njust applied. It is more radical, making FOR ROLE clearer, and also\nmoving my new FOR ROLE text up to the first paragraph, and reordering\nthe paragraphs to be clearer.\n\nI think this is too radical for backpatch to 11/12, but I think\n16/master makes sense after the minor releases next week.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Fri, 3 Nov 2023 12:53:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Fri, 2023-11-03 at 12:53 -0400, Bruce Momjian wrote:\n> I have developed the attached patch on top of the alter default patch I\n> just applied. It is more radical, making FOR ROLE clearer, and also\n> moving my new FOR ROLE text up to the first paragraph, and reordering\n> the paragraphs to be clearer.\n> \n> I think this is too radical for backpatch to 11/12, but I think\n> 16/master makes sense after the minor releases next week.\n\nI think it is a good idea to move part of the text to a new paragraph.\n\n> --- a/doc/src/sgml/ref/alter_default_privileges.sgml\n> +++ b/doc/src/sgml/ref/alter_default_privileges.sgml\n> @@ -90,23 +90,14 @@ REVOKE [ GRANT OPTION FOR ]\n> [...]\n> + As a non-superuser, you can change default privileges only for yourself\n> + and for roles that you are a member of. These privileges are not\n> + inherited, so member roles must use <command>SET ROLE</command> to\n> + access these privileges, or <command>ALTER DEFAULT PRIVILEGES</command>\n> + must be run for each member role. Privileges can be set globally\n> + (i.e., for all objects created in the current database), or just for\n> + objects created in specified schemas.\n\nThat this paragraph is not clear enough about who gets the privileges and\nwho creates the objects, and that is one of the difficulties in understanding\nALTER DEFAULT PRIVILEGES.\n\nPerhaps:\n\n <para>\n <command>ALTER DEFAULT PRIVILEGES</command> allows you to set the privileges\n that will be applied to objects created in the future. (It does not\n affect privileges assigned to already-existing objects.) Privileges can be\n set globally (i.e., for all objects created in the current database), or\n just for objects created in specified schemas.\n </para>\n\n <para>\n As a non-superuser, you can change default privileges only on objects created\n by yourself or by roles that you are a member of. If you alter the default\n privileges for a role, only objects created by that role will be affected.\n It is not sufficient to be a member of that role; member roles must use\n <command>SET ROLE</command> to assume the identity of the role for which\n default privileges were altered.\n </para>\n\n <para>\n There is no way to change the default privileges for objects created by\n any role. You have run <command>ALTER DEFAULT PRIVILEGES</command> for all\n roles that can create objects whose default privileges should be modified.\n </para>\n\n> @@ -136,12 +140,9 @@ REVOKE [ GRANT OPTION FOR ]\n> <term><replaceable>target_role</replaceable></term>\n> <listitem>\n> <para>\n> - The name of an existing role of which the current role is a member.\n> - Default access privileges are not inherited, so member roles\n> - must use <command>SET ROLE</command> to access these privileges,\n> - or <command>ALTER DEFAULT PRIVILEGES</command> must be run for\n> - each member role. If <literal>FOR ROLE</literal> is omitted,\n> - the current role is assumed.\n> + If <literal>FOR ROLE</literal> is specified, this is the role that\n> + will be assigned the new default privileges, or the current role\n> + if not specified.\n\nThis is downright wrong; the \"target_role\" will *not* be assigned any\nprivileges.\n\nPerhaps:\n\n <para>\n Default privileges are changed only for objects created by\n <replaceable>target_role</replaceable>. If <literal>FOR ROLE</literal>\n is omitted, the current role is assumed.\n </para>\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 04 Nov 2023 07:05:28 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Sat, Nov 4, 2023 at 07:05:28AM +0100, Laurenz Albe wrote:\n> On Fri, 2023-11-03 at 12:53 -0400, Bruce Momjian wrote:\n> > I have developed the attached patch on top of the alter default patch I\n> > just applied. It is more radical, making FOR ROLE clearer, and also\n> > moving my new FOR ROLE text up to the first paragraph, and reordering\n> > the paragraphs to be clearer.\n> > \n> > I think this is too radical for backpatch to 11/12, but I think\n> > 16/master makes sense after the minor releases next week.\n> \n> I think it is a good idea to move part of the text to a new paragraph.\n\nYeah, kind of radical but I think it needed to be done.\n\n> > --- a/doc/src/sgml/ref/alter_default_privileges.sgml\n> > +++ b/doc/src/sgml/ref/alter_default_privileges.sgml\n> > @@ -90,23 +90,14 @@ REVOKE [ GRANT OPTION FOR ]\n> > [...]\n> > + As a non-superuser, you can change default privileges only for yourself\n> > + and for roles that you are a member of. These privileges are not\n> > + inherited, so member roles must use <command>SET ROLE</command> to\n> > + access these privileges, or <command>ALTER DEFAULT PRIVILEGES</command>\n> > + must be run for each member role. Privileges can be set globally\n> > + (i.e., for all objects created in the current database), or just for\n> > + objects created in specified schemas.\n> \n> That this paragraph is not clear enough about who gets the privileges and\n> who creates the objects, and that is one of the difficulties in understanding\n> ALTER DEFAULT PRIVILEGES.\n\nYes, I like your new paragraphs better than what I had.\n\n> This is downright wrong; the \"target_role\" will *not* be assigned any\n> privileges.\n> \n> Perhaps:\n> \n> <para>\n> Default privileges are changed only for objects created by\n> <replaceable>target_role</replaceable>. If <literal>FOR ROLE</literal>\n> is omitted, the current role is assumed.\n> </para>\n\nYes, I see your point. Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sat, 4 Nov 2023 14:20:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Sat, 2023-11-04 at 14:20 -0400, Bruce Momjian wrote:\n> Yes, I see your point. Updated patch attached.\n\nAlmost perfect, except:\n\n+ Change default privileges for objects created by\n+ <replaceable>target_role</replaceable>; if omitted, the current\n+ role is modified.\n\nIt is not the role that is modified. Perhaps:\n\n [...]; if omitted, the current role is used.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 04 Nov 2023 22:12:42 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Sat, Nov 4, 2023 at 10:12:42PM +0100, Laurenz Albe wrote:\n> On Sat, 2023-11-04 at 14:20 -0400, Bruce Momjian wrote:\n> > Yes, I see your point. Updated patch attached.\n> \n> Almost perfect, except:\n> \n> + Change default privileges for objects created by\n> + <replaceable>target_role</replaceable>; if omitted, the current\n> + role is modified.\n> \n> It is not the role that is modified. Perhaps:\n> \n> [...]; if omitted, the current role is used.\n\nSure, attached. Here is the issue I have though, we are really not\nchanging default privileges for objects created in the future, we are\nchanging the role _now_ so future objects will have different default\nprivileges, right? I think wording like the above is kind of odd.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Sat, 4 Nov 2023 21:14:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "Hi,\n\nOn Sat, Nov 04, 2023 at 09:14:01PM -0400, Bruce Momjian wrote:\n> + There is no way to change the default privileges for objects created by\n> + any role. You have run <command>ALTER DEFAULT PRIVILEGES</command> for all\n> + roles that can create objects whose default privileges should be modified.\n\nThat second sentence is broken, it should be \"You have to run [...]\" I\nthink.\n\n\nMichael\n\n\n",
"msg_date": "Mon, 6 Nov 2023 09:32:27 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Sat, 2023-11-04 at 21:14 -0400, Bruce Momjian wrote:\n> > It is not the role that is modified. Perhaps:\n> > \n> > [...]; if omitted, the current role is used.\n> \n> Sure, attached. Here is the issue I have though, we are really not\n> changing default privileges for objects created in the future, we are\n> changing the role _now_ so future objects will have different default\n> privileges, right? I think wording like the above is kind of odd.\n\nI see what you mean. The alternative is to be precise, at the risk of\nrepeating ourselves:\n\n if omitted, default privileges will be changed for objects created by\n the current role.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 06 Nov 2023 09:44:14 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 09:32:27AM +0100, Michael Banck wrote:\n> Hi,\n> \n> On Sat, Nov 04, 2023 at 09:14:01PM -0400, Bruce Momjian wrote:\n> > + There is no way to change the default privileges for objects created by\n> > + any role. You have run <command>ALTER DEFAULT PRIVILEGES</command> for all\n> > + roles that can create objects whose default privileges should be modified.\n> \n> That second sentence is broken, it should be \"You have to run [...]\" I\n> think.\n\nAgreed, fixed, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 6 Nov 2023 10:26:16 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 09:44:14AM +0100, Laurenz Albe wrote:\n> On Sat, 2023-11-04 at 21:14 -0400, Bruce Momjian wrote:\n> > > It is not the role that is modified. Perhaps:\n> > > \n> > > [...]; if omitted, the current role is used.\n> > \n> > Sure, attached. Here is the issue I have though, we are really not\n> > changing default privileges for objects created in the future, we are\n> > changing the role _now_ so future objects will have different default\n> > privileges, right? I think wording like the above is kind of odd.\n> \n> I see what you mean. The alternative is to be precise, at the risk of\n> repeating ourselves:\n> \n> if omitted, default privileges will be changed for objects created by\n> the current role.\n\nOkay, I think I have good wording for this. I didn't like the wording\nof other roles, so I restructured that in the attached patch too.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 6 Nov 2023 10:55:55 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Mon, 2023-11-06 at 10:55 -0500, Bruce Momjian wrote:\n> Okay, I think I have good wording for this. I didn't like the wording\n> of other roles, so I restructured that in the attached patch too.\n\n> <para>\n> ! Default privileges apply only to the active role; the default\n> ! privileges of member roles have no affect on object permissions.\n> ! <command>SET ROLE</command> can be used to change the active user and\n> ! apply their default privileges.\n> ! </para>\n\nYou don't mean member roles, but roles that the active role is a member of,\nright?\n\nHow do you like my version, as attached?\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 06 Nov 2023 21:53:50 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Mon, Nov 6, 2023 at 09:53:50PM +0100, Laurenz Albe wrote:\n> On Mon, 2023-11-06 at 10:55 -0500, Bruce Momjian wrote:\n> > Okay, I think I have good wording for this. I didn't like the wording\n> > of other roles, so I restructured that in the attached patch too.\n> \n> > <para>\n> > ! Default privileges apply only to the active role; the default\n> > ! privileges of member roles have no affect on object permissions.\n> > ! <command>SET ROLE</command> can be used to change the active user and\n> > ! apply their default privileges.\n> > ! </para>\n> \n> You don't mean member roles, but roles that the active role is a member of,\n> right?\n\nYes, sorry fixed in the attached patch.\n\n> + <para>\n> + As a non-superuser, you can change default privileges only on objects created\n> + by yourself or by roles that you are a member of. However, you don't inherit\n> + altered default privileges from roles you are a member of; objects you create\n> + will receive the default privileges for your current role.\n> + </para>\n\nI went with different wording since I found the above confusing.\n\nYou didn't seem to like my SET ROLE suggestion so I removed it.\n\n> +\n> + <para>\n> + There is no way to change the default privileges for objects created by\n> + arbitrary roles. You have run <command>ALTER DEFAULT PRIVILEGES</command>\n\nI find the above sentence odd. What is its purpose?\n\n> + for any role that can create objects whose default privileges should be\n> + modified.\n> + </para>\n> +\n> + <para>\n> + Currently,\n> + only the privileges for schemas, tables (including views and foreign\n> + tables), sequences, functions, and types (including domains) can be\n> + altered. For this command, functions include aggregates and procedures.\n> + The words <literal>FUNCTIONS</literal> and <literal>ROUTINES</literal> are\n> + equivalent in this command. (<literal>ROUTINES</literal> is preferred\n> + going forward as the standard term for functions and procedures taken\n> + together. In earlier PostgreSQL releases, only the\n> + word <literal>FUNCTIONS</literal> was allowed. It is not possible to set\n> + default privileges for functions and procedures separately.)\n> + </para>\n> +\n> <para>\n> Default privileges that are specified per-schema are added to whatever\n> the global default privileges are for the particular object type.\n> @@ -136,8 +149,9 @@ REVOKE [ GRANT OPTION FOR ]\n> <term><replaceable>target_role</replaceable></term>\n> <listitem>\n> <para>\n> - The name of an existing role of which the current role is a member.\n> - If <literal>FOR ROLE</literal> is omitted, the current role is assumed.\n> + Default privileges are changed for objects created by the\n> + <replaceable>target_role</replaceable>, or the current\n> + role if unspecified.\n\nI like a verb to be first, like \"Change\" rather than \"default\nprivileges\".\n\nPatch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 7 Nov 2023 17:30:20 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "Hi,\n\nOn Tue, Nov 07, 2023 at 05:30:20PM -0500, Bruce Momjian wrote:\n> On Mon, Nov 6, 2023 at 09:53:50PM +0100, Laurenz Albe wrote:\n> > + <para>\n> > + There is no way to change the default privileges for objects created by\n> > + arbitrary roles. You have run <command>ALTER DEFAULT PRIVILEGES</command>\n> \n> I find the above sentence odd. What is its purpose?\n\nI guess it is to address the main purpose of this patch/confusion with\nusers: they believe setting DEFAULT PRIVILEGES will set grants\naccordingly for all objects created in the future, no matter who creates\nthem. So hammering in that this is not the case seems fine from my side\n(modulo the \"You have to run\" typo).\n\n\nMichael\n\n\n",
"msg_date": "Wed, 8 Nov 2023 07:56:02 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Tue, 2023-11-07 at 17:30 -0500, Bruce Momjian wrote:\n> You didn't seem to like my SET ROLE suggestion so I removed it.\n\nI thought that the information that you can use SET ROLE to assume\nthe identity of another role is correct, but leads a bit too far\nin the manual page of ALTER DEFAULT PRIVILEGES.\n\n> > + <para>\n> > + There is no way to change the default privileges for objects created by\n> > + arbitrary roles. You have run <command>ALTER DEFAULT PRIVILEGES</command>\n> \n> I find the above sentence odd. What is its purpose?\n\nI cannot count how many times I have seen the complaint \"I have run ALTER DEFAULT\nPRIVILEGES, and now when some other user creates a table, the permissions are\nunchanged\". People tend to think that if you omit FOR ROLE, the change applies to\nPUBLIC.\n\nYour improved documentation of \"target_role\" already covers that somewhat, so if\nyou don't like the repetition, I'm alright with that. I just thought it might\nbe worth stating it explicitly.\n\nI think your patch is fine and ready to go.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Nov 2023 13:12:24 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 01:12:24PM +0100, Laurenz Albe wrote:\n> On Tue, 2023-11-07 at 17:30 -0500, Bruce Momjian wrote:\n> > You didn't seem to like my SET ROLE suggestion so I removed it.\n> \n> I thought that the information that you can use SET ROLE to assume\n> the identity of another role is correct, but leads a bit too far\n> in the manual page of ALTER DEFAULT PRIVILEGES.\n\nAgreed, it was a stretch.\n\n> > > + <para>\n> > > + There is no way to change the default privileges for objects created by\n> > > + arbitrary roles. You have run <command>ALTER DEFAULT PRIVILEGES</command>\n> > \n> > I find the above sentence odd. What is its purpose?\n> \n> I cannot count how many times I have seen the complaint \"I have run ALTER DEFAULT\n> PRIVILEGES, and now when some other user creates a table, the permissions are\n> unchanged\". People tend to think that if you omit FOR ROLE, the change applies to\n> PUBLIC.\n\nI agree we have to be clear, and this is complex, which is why we are\nstruggling. I feel we have to be clear about who is allowed to modify\nwhich default privileges, and what default privileges are active during\nobject creation. I ended up basically saying you can modify the default\nprivileges of roles you are member of, but they don't apply at creation\ntime for your own role. I am open to better wording.\n\n> Your improved documentation of \"target_role\" already covers that somewhat, so if\n> you don't like the repetition, I'm alright with that. I just thought it might\n> be worth stating it explicitly.\n> \n> I think your patch is fine and ready to go.\n\nThanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 12:42:02 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 01:12:24PM +0100, Laurenz Albe wrote:\n> > I find the above sentence odd. What is its purpose?\n> \n> I cannot count how many times I have seen the complaint \"I have run ALTER DEFAULT\n> PRIVILEGES, and now when some other user creates a table, the permissions are\n> unchanged\". People tend to think that if you omit FOR ROLE, the change applies to\n> PUBLIC.\n> \n> Your improved documentation of \"target_role\" already covers that somewhat, so if\n> you don't like the repetition, I'm alright with that. I just thought it might\n> be worth stating it explicitly.\n> \n> I think your patch is fine and ready to go.\n\nPatch applied back to PG 16.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:28:05 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Mon, 2023-11-13 at 14:28 -0500, Bruce Momjian wrote:\n> Patch applied back to PG 16.\n\nGreat thanks!\n\nI am hopeful that that will reduce people's confusion about this feature.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 13 Nov 2023 20:33:33 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 08:33:33PM +0100, Laurenz Albe wrote:\n> On Mon, 2023-11-13 at 14:28 -0500, Bruce Momjian wrote:\n> > Patch applied back to PG 16.\n> \n> Great thanks!\n> \n> I am hopeful that that will reduce people's confusion about this feature.\n\nAgreed!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 14:45:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Version 14/15 documentation Section \"Alter Default Privileges\""
}
] |
[
{
"msg_contents": "The comments atop seem to indicate that we always accumulate\ninvalidation messages in top-level transactions which is neither\nrequired nor match with the code. This is introduced in the commit\nc55040ccd0 and I have observed it while working on a fix for commit\n16b1fe0037.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 3 Nov 2022 16:23:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix comments atop ReorderBufferAddInvalidations"
},
{
"msg_contents": "Hi\n\nOn Thu, Nov 3, 2022 at 7:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> The comments atop seem to indicate that we always accumulate\n> invalidation messages in top-level transactions which is neither\n> required nor match with the code. This is introduced in the commit\n> c55040ccd0 and I have observed it while working on a fix for commit\n> 16b1fe0037.\n\nThank you for the patch. It looks good to me.\n\nI think we can backpatch it to avoid confusion in future.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 4 Nov 2022 14:43:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments atop ReorderBufferAddInvalidations"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 3, 2022 at 7:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The comments atop seem to indicate that we always accumulate\n> > invalidation messages in top-level transactions which is neither\n> > required nor match with the code. This is introduced in the commit\n> > c55040ccd0 and I have observed it while working on a fix for commit\n> > 16b1fe0037.\n>\n> Thank you for the patch. It looks good to me.\n>\n> I think we can backpatch it to avoid confusion in future.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Nov 2022 18:12:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix comments atop ReorderBufferAddInvalidations"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile working on an extension, I found that simplehash.h is missing \nexplicit casts in four places. Without these casts, compiling code \nincluding simplehash.h yields warnings if the code is compiled with \n-Wc++-compat.\n\nPostgreSQL seems to mostly prefer omitting the explicit casts, however \nthere are many places where an explicit cast is actually used. Among \nmany others, see e.g.\n\nbool.c:\n state = (BoolAggState *) MemoryContextAlloc(agg_context, \nsizeof(BoolAggState));\n\ndomains.c:\n my_extra = (DomainIOData *) MemoryContextAlloc(mcxt, \nsizeof(DomainIOData));\n\nWhat about, while not being strictly necessary for PostgreSQL itself, \nalso adding such casts to simplehash.h so that it can be used in code \nwhere -Wc++-compat is enabled?\n\nAttached is a small patch that adds the aforementioned casts.\nThanks for your consideration!\n\n--\nDavid Geier\n(ServiceNow)",
"msg_date": "Thu, 3 Nov 2022 12:34:41 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add explicit casts in four places to simplehash.h"
},
{
"msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> What about, while not being strictly necessary for PostgreSQL itself, \n> also adding such casts to simplehash.h so that it can be used in code \n> where -Wc++-compat is enabled?\n\nSeems reasonable, so done (I fixed one additional spot you missed).\n\nThe bigger picture here is that we do actually endeavor to keep\n(most of) our headers C++ clean, but our tool cpluspluscheck misses\nthese problems because it doesn't try to use these macros.\nI wonder whether there is a way to do better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Nov 2022 10:50:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add explicit casts in four places to simplehash.h"
},
{
"msg_contents": "On 11/3/22 15:50, Tom Lane wrote:\n> Seems reasonable, so done (I fixed one additional spot you missed).\nThanks!\n> The bigger picture here is that we do actually endeavor to keep\n> (most of) our headers C++ clean, but our tool cpluspluscheck misses\n> these problems because it doesn't try to use these macros.\n> I wonder whether there is a way to do better.\n\nWhat about having a custom header alongside cpluspluscheck which \nreferences all macros we care about?\nWe could start with the really big macros like the ones from \nsimplehash.h and add as we go.\nI could give this a try if deemed useful.\n\n--\nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:23:54 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add explicit casts in four places to simplehash.h"
},
{
"msg_contents": "David Geier <geidav.pg@gmail.com> writes:\n> On 11/3/22 15:50, Tom Lane wrote:\n>> The bigger picture here is that we do actually endeavor to keep\n>> (most of) our headers C++ clean, but our tool cpluspluscheck misses\n>> these problems because it doesn't try to use these macros.\n>> I wonder whether there is a way to do better.\n\n> What about having a custom header alongside cpluspluscheck which \n> references all macros we care about?\n\nCan't see that that would help, because of the hand maintenance\nrequired to make it useful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Nov 2022 09:25:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add explicit casts in four places to simplehash.h"
}
] |
[
{
"msg_contents": "Hello,\n\nCurrently pg_rewind refuses to run if full_page_writes is off. This is \nto prevent it to run into a torn page during operation.\n\nThis is usually a good call, but some file systems like ZFS are \nnaturally immune to torn page (maybe btrfs too, but I don't know for \nsure for this one).\n\nHaving the option to use pg_rewind without the cost associated with \nfull_page_writes when using a system immune to torn page is beneficial: \nincreased performance and more compact WAL.\n\nThis patch adds a new option \"--no-ensure-full-page-writes\" to pg_rewind \nfor this situation, as well as patched documentation.\n\nRegards,\nJeremie Grauer",
"msg_date": "Thu, 3 Nov 2022 16:54:13 +0100",
"msg_from": "=?UTF-8?Q?J=c3=a9r=c3=a9mie_Grauer?= <jeremie.grauer@cosium.com>",
"msg_from_op": true,
"msg_subject": "new option to allow pg_rewind to run without full_page_writes"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-03 16:54:13 +0100, Jérémie Grauer wrote:\n> Currently pg_rewind refuses to run if full_page_writes is off. This is to\n> prevent it to run into a torn page during operation.\n>\n> This is usually a good call, but some file systems like ZFS are naturally\n> immune to torn page (maybe btrfs too, but I don't know for sure for this\n> one).\n\nNote that this isn't about torn pages in case of crashes, but about reading\npages while they're being written to.\n\nRight now, that definitely allows for torn reads, because of the way\npg_read_binary_file() is implemented. We only ensure a 4k read size from the\nview of our code, which obviously can lead to torn 8k page reads, no matter\nwhat the filesystem guarantees.\n\nAlso, for reasons I don't understand we use C streaming IO or\npg_read_binary_file(), so you'd also need to ensure that the buffer size used\nby the stream implementation can't cause the reads to happen in smaller\nchunks. Afaict we really shouldn't use file streams here, then we'd at least\nhave control over that aspect.\n\n\nDoes ZFS actually guarantee that there never can be short reads? As soon as\nthey are possible, full page writes are needed.\n\n\n\nThis isn't a fundamental issue - we could have a version of\npg_read_binary_file() for relation data that prevents the page being written\nout concurrently by locking the buffer page. In addition it could often avoid\nneeding to read the page from the OS / disk, if present in shared buffers\n(perhaps minus cases where we haven't flushed the WAL yet, but we could also\nflush the WAL in those).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Nov 2022 19:38:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: new option to allow pg_rewind to run without full_page_writes"
},
{
"msg_contents": "Hello,\n\nFirst, thank you for reviewing.\n\nZFS writes files in increment of its configured recordsize for the \ncurrent filesystem dataset.\n\nSo with a recordsize configured to be a multiple of 8K, you can't get \ntorn pages on writes, that's why full_page_writes can be safely \ndeactivated on ZFS (the usual advice is to configure ZFS with a \nrecordsize of 8K for postgres, but on some workloads, it can actually be \nbeneficial to go to a higher multiple of 8K).\n\nOn 06/11/2022 03:38, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-03 16:54:13 +0100, Jérémie Grauer wrote:\n>> Currently pg_rewind refuses to run if full_page_writes is off. This is to\n>> prevent it to run into a torn page during operation.\n>>\n>> This is usually a good call, but some file systems like ZFS are naturally\n>> immune to torn page (maybe btrfs too, but I don't know for sure for this\n>> one).\n> \n> Note that this isn't about torn pages in case of crashes, but about reading\n> pages while they're being written to.\nLike I wrote above, ZFS will prevent torn pages on writes, like \nfull_page_writes does.\n> \n> Right now, that definitely allows for torn reads, because of the way\n> pg_read_binary_file() is implemented. We only ensure a 4k read size from the\n> view of our code, which obviously can lead to torn 8k page reads, no matter\n> what the filesystem guarantees.\n> \n> Also, for reasons I don't understand we use C streaming IO or\n> pg_read_binary_file(), so you'd also need to ensure that the buffer size used\n> by the stream implementation can't cause the reads to happen in smaller\n> chunks. Afaict we really shouldn't use file streams here, then we'd at least\n> have control over that aspect.\n> \n> \n> Does ZFS actually guarantee that there never can be short reads? 
As soon as\n> they are possible, full page writes are needed\n\nI may be missing something here: how does full_page_writes prevent \nshort _reads_ ?\n\nPresumably, if we do something like read the first 4K of a file, then \nchange the file, then read the next 4K, the second 4K may be a torn \nread. But I fail to see how full_page_writes prevents this since it only \nacts on writes.\n\n> This isn't an fundamental issue - we could have a version of\n> pg_read_binary_file() for relation data that prevents the page being written\n> out concurrently by locking the buffer page. In addition it could often avoid\n> needing to read the page from the OS / disk, if present in shared buffers\n> (perhaps minus cases where we haven't flushed the WAL yet, but we could also\n> flush the WAL in those).\n\nI agree, but this would need a different patch, which may be beyond my \nskills.\n\n> Greetings,\n> \n> Andres Freund\n\nAnyway, ZFS will act like full_page_writes is always active, so isn't \nthe proposed modification to pg_rewind valid?\n\nYou'll find attached a second version of the patch, which is cleaner \n(removed double negation).\n\nRegards,\nJérémie Grauer",
"msg_date": "Tue, 8 Nov 2022 00:07:09 +0100",
"msg_from": "=?UTF-8?Q?J=c3=a9r=c3=a9mie_Grauer?= <jeremie.grauer@cosium.com>",
"msg_from_op": true,
"msg_subject": "Re: new option to allow pg_rewind to run without full_page_writes"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 12:07 PM Jérémie Grauer\n<jeremie.grauer@cosium.com> wrote:\n> On 06/11/2022 03:38, Andres Freund wrote:\n> > On 2022-11-03 16:54:13 +0100, Jérémie Grauer wrote:\n> >> Currently pg_rewind refuses to run if full_page_writes is off. This is to\n> >> prevent it to run into a torn page during operation.\n> >>\n> >> This is usually a good call, but some file systems like ZFS are naturally\n> >> immune to torn page (maybe btrfs too, but I don't know for sure for this\n> >> one).\n> >\n> > Note that this isn't about torn pages in case of crashes, but about reading\n> > pages while they're being written to.\n\n> Like I wrote above, ZFS will prevent torn pages on writes, like\n> full_page_writes does.\n\nJust to spell out the distinction Andres was making, and maybe try to\nanswer a couple of questions if I can, there are two completely\ndifferent phenomena here:\n\n1. Generally full_page_writes is for handling a lack of atomic writes\non power loss, but ZFS already does that itself by virtue of its COW\ndesign and data-logging in certain cases.\n\n2. Here we are using full_page_writes to handle lack of atomicity\nwhen there are concurrent reads and writes to the same file from\ndifferent threads. Basically, by turning on full_page_writes we say\nthat we don't trust any block that might have been written to during\nthe copying. Again, ZFS already handles that for itself: it uses\nrange locking in the read and write paths (see zfs_rangelock_enter()\nin zfs_write() etc), BUT that's only going to work if the actual\npread()/pwrite() system calls that reach ZFS are aligned with\nPostgreSQL's pages.\n\nEvery now and then a discussion breaks out about WTF POSIX actually\nrequires WRT concurrent read/write, but it's trivial to show that the\nmost popular Linux filesystem exposes randomly mashed-up data from old\nand new versions of even small writes if you read while a write is\nconcurrently in progress[1], while many others don't. 
That's what the\n2nd thing is protecting against. I think it must be possible to show\nthat breaking on ZFS too, *if* the file regions arriving into system\ncalls are NOT correctly aligned. As Andres points out, <stdio.h>\nbuffered IO streams create a risk there: we have no idea what system\ncalls are reaching ZFS, so it doesn't seem safe to turn off full page\nwrites unless you also fix that.\n\n> > Does ZFS actually guarantee that there never can be short reads? As soon as\n> > they are possible, full page writes are neededI may be missing something here: how does full_page_writes prevents\n> short _reads_ ?\n\nI don't know, but I think the paranoid approach would be that if you\nget a short read, you go back and pread() at least that whole page, so\nall your system calls are fully aligned. Then I think you'd be safe?\nBecause zfs_read() does:\n\n /*\n * Lock the range against changes.\n */\n zfs_locked_range_t *lr = zfs_rangelock_enter(&zp->z_rangelock,\n zfs_uio_offset(uio), zfs_uio_resid(uio), RL_READER);\n\nSo it should be possible to make a safe version of this patch, by\nteaching the file-reading code to require BLCKSZ integrity for all\nreads.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2B19bZKidSiWmMsDmgUVe%3D_rr0m57LfR%2BnAbWprVDd_cw%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 8 Nov 2022 13:04:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: new option to allow pg_rewind to run without full_page_writes"
},
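The "paranoid approach" Thomas describes (on a short read, go back and pread() the whole aligned page, so every system call ZFS sees spans a full page and falls under a single range lock) can be sketched in standalone C. This is an illustration only, not code from pg_rewind; the helper name, the retry cap, and the hard-coded 8K page size are assumptions:

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLCKSZ 8192				/* PostgreSQL's default page size */

/*
 * Hypothetical helper: read one BLCKSZ page at a page-aligned offset.
 * On a short read the whole page is re-read from scratch, so each
 * pread() call the filesystem sees covers the full page.  Returns
 * BLCKSZ on success, 0 at EOF, -1 on error, or a short count if the
 * file ends mid-page after a few retries.
 */
static int
read_full_page(int fd, off_t pageno, char *buf)
{
	off_t		offset = pageno * (off_t) BLCKSZ;
	int			retries = 3;

	for (;;)
	{
		ssize_t		n = pread(fd, buf, BLCKSZ, offset);

		if (n < 0)
			return -1;			/* I/O error */
		if (n == BLCKSZ || n == 0 || retries-- == 0)
			return (int) n;
		/* short read: retry the whole page so the syscall stays aligned */
	}
}
```

The same idea could back a "require BLCKSZ integrity for all reads" rule in the file-reading code, instead of going through a <stdio.h> stream whose internal buffer size is outside our control.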
{
"msg_contents": "Hi,\n\nOn 2022-11-08 00:07:09 +0100, J�r�mie Grauer wrote:\n> On 06/11/2022 03:38, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-11-03 16:54:13 +0100, J�r�mie Grauer wrote:\n> > > Currently pg_rewind refuses to run if full_page_writes is off. This is to\n> > > prevent it to run into a torn page during operation.\n> > >\n> > > This is usually a good call, but some file systems like ZFS are naturally\n> > > immune to torn page (maybe btrfs too, but I don't know for sure for this\n> > > one).\n> >\n> > Note that this isn't about torn pages in case of crashes, but about reading\n> > pages while they're being written to.\n\n> Like I wrote above, ZFS will prevent torn pages on writes, like\n> full_page_writes does.\n\nUnfortunately not relevant for pg_rewind due to the issues mentioned\nsubsequently.\n\n\n> > Right now, that definitely allows for torn reads, because of the way\n> > pg_read_binary_file() is implemented. We only ensure a 4k read size from the\n> > view of our code, which obviously can lead to torn 8k page reads, no matter\n> > what the filesystem guarantees.\n> >\n> > Also, for reasons I don't understand we use C streaming IO or\n> > pg_read_binary_file(), so you'd also need to ensure that the buffer size used\n> > by the stream implementation can't cause the reads to happen in smaller\n> > chunks. Afaict we really shouldn't use file streams here, then we'd at least\n> > have control over that aspect.\n> >\n> >\n> > Does ZFS actually guarantee that there never can be short reads? 
As soon as\n> > they are possible, full page writes are needed\n\n> I may be missing something\n> here: how does full_page_writes prevents\n> short _reads_ ?\n\nYes.\n\n\n> Presumably, if we do something like read the first 4K of a file, then change\n> the file, then read the next 4K, the second 4K may be a torn read.\n\nCorrect.\n\n\n> But I fail to see how full_page_writes prevents this since it only act on writes\n\nIt ensures the damage is later repaired during WAL replay. Which can only\nhappen if the WAL contains the necessary information to do so - the full page\nwrites.\n\n\nI suspect to avoid the need for this we'd need to atomically read all the\npages involved in a WAL record (presumably by locking the pages against\nIO). That'd then safely allow skipping replay of WAL records based on the LSN.\n\nA slightly easier thing would be to force-enable full page writes just for the\nduration of a rewind, similar to what we do during base backups. But that'd\nstill require a bunch more work than done here.\n\n\n> > This isn't an fundamental issue - we could have a version of\n> > pg_read_binary_file() for relation data that prevents the page being written\n> > out concurrently by locking the buffer page. In addition it could often avoid\n> > needing to read the page from the OS / disk, if present in shared buffers\n> > (perhaps minus cases where we haven't flushed the WAL yet, but we could also\n> > flush the WAL in those).\n\n> I agree, but this would need a different patch, which may be beyond my\n> skills.\n\n> > Greetings,\n> >\n> > Andres Freund\n\n> Anyway, ZFS will act like full_page_writes is always active, so isn't the\n> proposed modification to pg_rewind valid?\n\nNo. This really isn't about the crash safety aspects of full page writes, so\nthe fact that ZFS is used is just not really relevant.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Mon, 7 Nov 2022 16:34:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: new option to allow pg_rewind to run without full_page_writes"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at the code in DecodeDateTime() around line 1382:\n\n tmask = DTK_M(type);\n\nIn case type is UNKNOWN_FIELD, the above macro would shift 1 left 31 bits\nwhich cannot be represented in type 'int'.\n\nLooking down in the same function, we can see that tmask is assigned for\nevery legitimate case.\n\nIf my understanding is correct, please take a look at the proposed patch.\n\nThanks",
"msg_date": "Thu, 3 Nov 2022 10:04:44 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "remove unnecessary assignment to tmask in DecodeDateTime"
}
] |
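The hazard Zhihong reports in the thread above is the classic signed-shift overflow: DTK_M() expands to `0x01 << (t)`, and with UNKNOWN_FIELD being 31, `1 << 31` overflows a signed int, which is undefined behaviour in C. A standalone sketch follows; the macros mirror the datetime.h idiom but are not verbatim, and `field_mask` is a hypothetical illustration, since the actual fix is simply to delete the early assignment because every legitimate case in DecodeDateTime() sets tmask itself:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the idiom in src/include/utils/datetime.h (sketch, not verbatim) */
#define UNKNOWN_FIELD	31
#define DTK_M(t)		(0x01 << (t))			/* 0x01 is a signed int: 1 << 31 is UB */
#define DTK_M_U(t)		(UINT32_C(1) << (t))	/* unsigned: well defined for t == 31 */

/*
 * Hypothetical guard: never build a mask from UNKNOWN_FIELD.  The patch
 * discussed above instead removes the "tmask = DTK_M(type)" assignment
 * entirely, which avoids evaluating the shift in the first place.
 */
static uint32_t
field_mask(int type)
{
	return (type == UNKNOWN_FIELD) ? 0 : DTK_M_U(type);
}
```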
[
{
"msg_contents": "In the thread discussing the login event trigger patch it was argued that we\nwant to avoid recommending single-user mode for troubleshooting tasks, and a\nGUC for temporarily disabling event triggers was proposed.\n\nSince the login event trigger patch lost momentum, I've broken out the GUC part\ninto a separate patch to see if there is interest in that part alone, to chip\naway at situations requiring single-user mode.\n\nThe patch adds a new GUC, ignore_event_trigger with two option values, 'all'\nand 'none' (the login event patch had 'login' as well). This can easily be\nexpanded to have the different types of events, or pared down to a boolean\non/off. I think it makes more sense to make it more fine-grained but I think\nthere is merit in either direction.\n\nIf there is interest in this I'm happy to pursue a polished version of this\npatch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 3 Nov 2022 21:47:35 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 3 Nov 2022, at 21:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> The patch adds a new GUC, ignore_event_trigger with two option values, 'all'\n> and 'none' (the login event patch had 'login' as well).\n\nThe attached v2 fixes a small bug which caused testfailures the CFBot.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 29 Nov 2022 13:45:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Tue, 29 Nov 2022 at 18:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 3 Nov 2022, at 21:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > The patch adds a new GUC, ignore_event_trigger with two option values, 'all'\n> > and 'none' (the login event patch had 'login' as well).\n>\n> The attached v2 fixes a small bug which caused testfailures the CFBot.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\n5f6401f81cb24bd3930e0dc589fc4aa8b5424cdc ===\n=== applying patch\n./v2-0001-Add-GUC-for-temporarily-disabling-event-triggers.patch\npatching file doc/src/sgml/config.sgml\nHunk #1 succeeded at 9480 (offset 117 lines).\n.....\npatching file src/backend/utils/misc/postgresql.conf.sample\nHunk #1 FAILED at 701.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/utils/misc/postgresql.conf.sample.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4013.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:08:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 11 Jan 2023, at 17:38, vignesh C <vignesh21@gmail.com> wrote:\n> \n> On Tue, 29 Nov 2022 at 18:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 3 Nov 2022, at 21:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> The patch adds a new GUC, ignore_event_trigger with two option values, 'all'\n>>> and 'none' (the login event patch had 'login' as well).\n>> \n>> The attached v2 fixes a small bug which caused testfailures the CFBot.\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nThe attached rebased v3 fixes the conflict.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 12 Jan 2023 21:26:05 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 12:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 11 Jan 2023, at 17:38, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, 29 Nov 2022 at 18:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> On 3 Nov 2022, at 21:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> The patch adds a new GUC, ignore_event_trigger with two option values,\n> 'all'\n> >>> and 'none' (the login event patch had 'login' as well).\n> >>\n> >> The attached v2 fixes a small bug which caused testfailures the CFBot.\n> >\n> > The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n>\n> The attached rebased v3 fixes the conflict.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n> Hi,\n\n`this GUC allows to temporarily suspending event triggers.`\n\nIt would be better to mention the name of GUC in the description.\nTypo: suspending -> suspend\n\nw.r.t. guc `ignore_event_trigger`, since it is supposed to disable event\ntriggers for a short period of time, is there mechanism to turn it off\n(IGNORE_EVENT_TRIGGER_ALL) automatically ?\n\nCheers\n\nOn Thu, Jan 12, 2023 at 12:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:> On 11 Jan 2023, at 17:38, vignesh C <vignesh21@gmail.com> wrote:\n> \n> On Tue, 29 Nov 2022 at 18:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 3 Nov 2022, at 21:47, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> The patch adds a new GUC, ignore_event_trigger with two option values, 'all'\n>>> and 'none' (the login event patch had 'login' as well).\n>> \n>> The attached v2 fixes a small bug which caused testfailures the CFBot.\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nThe attached rebased v3 fixes the conflict.\n\n--\nDaniel Gustafsson https://vmware.com/\nHi,`this GUC allows to temporarily suspending event triggers.`It would be better to mention the name of GUC in the description.Typo: suspending -> 
suspendw.r.t. guc `ignore_event_trigger`, since it is supposed to disable event triggers for a short period of time, is there mechanism to turn it off (IGNORE_EVENT_TRIGGER_ALL) automatically ?Cheers",
"msg_date": "Thu, 12 Jan 2023 13:04:56 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi Daniel,\r\n\r\nI have reviewed the patch and I liked it (well I did liked it already since it was a part of login trigger patch previously). \r\nAll tests are passed and the manual experiments with all types of event triggers are also passed.\r\n\r\nEverything is fine and I think It can be marked as Ready for Committer, although I have one final question.\r\nThere is a complete framework for disabling various types of the event triggers separately, but, the list of valid GUC values only include 'all' and 'none'. Why not adding support for all the event trigger types separately? Everything is already there in the patch; the only thing needed is expanding couple of enums. It's cheap in terms of code size and even cheaper in terms of performance. And moreover - it would be a good example for anyone adding new trigger types.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Fri, 27 Jan 2023 14:00:11 +0000",
"msg_from": "Mikhail Gribkov <youzhick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 27 Jan 2023, at 15:00, Mikhail Gribkov <youzhick@gmail.com> wrote:\n\n> There is a complete framework for disabling various types of the event triggers separately, but, the list of valid GUC values only include 'all' and 'none'. Why not adding support for all the event trigger types separately? Everything is already there in the patch; the only thing needed is expanding couple of enums. It's cheap in terms of code size and even cheaper in terms of performance. And moreover - it would be a good example for anyone adding new trigger types.\n\nI can't exactly recall my reasoning, but I do think you're right that if we're\nto have this GUC it should support the types of existing EVTs. The updated v4\nimplements that as well as a rebase on top of master and fixing a typo\ndiscovered upthread.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 6 Mar 2023 13:24:56 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
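As a standalone sketch of the fine-grained design Mikhail asks for above, each event-trigger type can get its own bit, with 'all' and 'none' as the two extremes. The names and the bitmask representation here are illustrative assumptions, not the patch's actual enum or GUC plumbing:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical option values for ignore_event_trigger (illustration only). */
typedef enum
{
	EVT_IGNORE_NONE = 0,
	EVT_IGNORE_DDL_COMMAND_START = 1 << 0,
	EVT_IGNORE_DDL_COMMAND_END = 1 << 1,
	EVT_IGNORE_TABLE_REWRITE = 1 << 2,
	EVT_IGNORE_SQL_DROP = 1 << 3,
	EVT_IGNORE_ALL = (1 << 4) - 1
} EvtIgnoreMask;

/* The setting's current value; 'none' (fire everything) as the default. */
static int ignore_event_trigger = EVT_IGNORE_NONE;

/* Would be consulted before firing the triggers for a given event class. */
static bool
event_triggers_enabled_for(EvtIgnoreMask event)
{
	return (ignore_event_trigger & event) == 0;
}
```

Expanding the value list later then only means adding enum bits, which is the cheapness the review points out.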
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nI like it now.\r\n\r\n* The patch does what it intends to do;\r\n* The implementation way is clear;\r\n* All test are passed;\r\n* No additional problems catched - at least by my eyes;\r\n\r\nI think it can be marked as Ready for Committer\r\n\r\nN.B. In fact I've encountered couple of failed tests during installcheck-world, although the same fails are there even for master branch. Thus doesn't seem to be this patch issue.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 07 Mar 2023 15:02:17 +0000",
"msg_from": "Mikhail Gribkov <youzhick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 7 Mar 2023, at 16:02, Mikhail Gribkov <youzhick@gmail.com> wrote:\n\n> * The patch does what it intends to do;\n> * The implementation way is clear;\n> * All test are passed;\n> * No additional problems catched - at least by my eyes;\n> \n> I think it can be marked as Ready for Committer\n\nThis patch has been RFC for some time, and has been all green in the CFbot. I\nwould like to go ahead with it this cycle since it gives a tool for admins to\navoid single-user mode - which is something we want to move away from. Even\nthough login event triggers aren't going in (which is where this originated),\nthere are still lots of ways to break things with other ev triggers.\n\nAny objections?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sun, 2 Apr 2023 21:24:33 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 01:24:56PM +0100, Daniel Gustafsson wrote:\n> > On 27 Jan 2023, at 15:00, Mikhail Gribkov <youzhick@gmail.com> wrote:\n> \n> > There is a complete framework for disabling various types of the event triggers separately, but, the list of valid GUC values only include 'all' and 'none'. Why not adding support for all the event trigger types separately? Everything is already there in the patch; the only thing needed is expanding couple of enums. It's cheap in terms of code size and even cheaper in terms of performance. And moreover - it would be a good example for anyone adding new trigger types.\n> \n> I can't exactly recall my reasoning, but I do think you're right that if we're\n> to have this GUC it should support the types of existing EVTs. The updated v4\n> implements that as well as a rebase on top of master and fixing a typo\n> discovered upthread.\n> \n+ gettext_noop(\"Disable event triggers for the duration of the session.\"),\n\nWhy does is it say \"for the duration of the session\" ?\n\nIt's possible to disable ignoring, and within the same session.\nGUCs are typically \"for the duration of the session\" .. but they don't\nsay so (and don't need to).\n\n+ elog(ERROR, \"unsupport event trigger: %d\", event);\n\ntypo: unsupported\n\n+ Allows to temporarily disable event triggers from executing in order\n\n=> Allow temporarily disabling execution of event triggers ..\n\n+ to troubleshoot and repair faulty event triggers. The value matches\n+ the type of event trigger to be ignored:\n+ <literal>ddl_command_start</literal>, <literal>ddl_command_end</literal>,\n+ <literal>table_rewrite</literal> and <literal>sql_drop</literal>.\n+ Additionally, all event triggers can be disabled by setting it to\n+ <literal>all</literal>. 
Setting the value to <literal>none</literal>\n+ will ensure that all event triggers are enabled, this is the default\n\nIt doesn't technically \"ensure that they're enabled\", since they can be\ndisabled by ALTER. Better to say that it \"doesn't disable any even triggers\".\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 2 Apr 2023 14:48:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 2 Apr 2023, at 21:48, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> + gettext_noop(\"Disable event triggers for the duration of the session.\"),\n> \n> Why does is it say \"for the duration of the session\" ?\n> \n> It's possible to disable ignoring, and within the same session.\n> GUCs are typically \"for the duration of the session\" .. but they don't\n> say so (and don't need to).\n> \n> + elog(ERROR, \"unsupport event trigger: %d\", event);\n> \n> typo: unsupported\n> \n> + Allows to temporarily disable event triggers from executing in order\n> \n> => Allow temporarily disabling execution of event triggers ..\n> \n> + to troubleshoot and repair faulty event triggers. The value matches\n> + the type of event trigger to be ignored:\n> + <literal>ddl_command_start</literal>, <literal>ddl_command_end</literal>,\n> + <literal>table_rewrite</literal> and <literal>sql_drop</literal>.\n> + Additionally, all event triggers can be disabled by setting it to\n> + <literal>all</literal>. Setting the value to <literal>none</literal>\n> + will ensure that all event triggers are enabled, this is the default\n> \n> It doesn't technically \"ensure that they're enabled\", since they can be\n> disabled by ALTER. Better to say that it \"doesn't disable any even triggers\".\n\nAll comments above addressed in the attached v5, thanks for review!\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 3 Apr 2023 14:45:53 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 8:46 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> All comments above addressed in the attached v5, thanks for review!\n\nI continue to think it's odd that the sense of this is inverted as\ncompared with row_security.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 09:09:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 3 Apr 2023, at 15:09, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Apr 3, 2023 at 8:46 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> All comments above addressed in the attached v5, thanks for review!\n> \n> I continue to think it's odd that the sense of this is inverted as\n> compared with row_security.\n\nI'm not sure I follow. Do you propose that the GUC enables classes of event\ntriggers, the default being \"all\" (or similar) and one would remove the type of\nEVT for which debugging is needed? That doesn't seem like a bad idea, just one\nthat hasn't come up in the discussion (and I didn't think about).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Apr 2023 15:15:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 9:15 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 3 Apr 2023, at 15:09, Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Mon, Apr 3, 2023 at 8:46 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> All comments above addressed in the attached v5, thanks for review!\n> >\n> > I continue to think it's odd that the sense of this is inverted as\n> > compared with row_security.\n>\n> I'm not sure I follow. Do you propose that the GUC enables classes of event\n> triggers, the default being \"all\" (or similar) and one would remove the type of\n> EVT for which debugging is needed? That doesn't seem like a bad idea, just one\n> that hasn't come up in the discussion (and I didn't think about).\n\nRight. Although to be fair, that idea doesn't sound as good if we're\ngoing to have settings other than \"on\" or \"off\". If this is just\ndisable_event_triggers = on | off, then why not flip the sense around\nand make it event_triggers = off | on, just as we do for row_security?\nBut if we're going to allow specific types of event triggers to be\ndisabled, and we think it's likely that people will want to disable\none specific type of event trigger while leaving the others alone,\nthat might not be very convenient, because you could end up having to\nlist all the things you do want instead of the one thing you don't\nwant. On the third hand, in other contexts, I've often advocating for\ngiving options a positive sense (what are we going to do?) rather than\na negative sense (what are we not going to do?). For example, the\nTIMING option to EXPLAIN was originally proposed with a name like\nDISABLE_TIMING or something, and the value inverted, and I said let's\nnot do that. And similarly in other places. A case where I didn't\nfollow that principle is VACUUM (DISABLE_PAGE_SKIPPING) which now\nseems like a wart to me. 
Why isn't it VACUUM (PAGE_SKIPPING) again\nwith the opposite value?\n\nI'm not sure what the best thing to do is here, I just think it\ndeserves some thought.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 10:09:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 3 Apr 2023, at 16:09, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Apr 3, 2023 at 9:15 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> On 3 Apr 2023, at 15:09, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>>> I continue to think it's odd that the sense of this is inverted as\n>>> compared with row_security.\n>> \n>> I'm not sure I follow. Do you propose that the GUC enables classes of event\n>> triggers, the default being \"all\" (or similar) and one would remove the type of\n>> EVT for which debugging is needed? That doesn't seem like a bad idea, just one\n>> that hasn't come up in the discussion (and I didn't think about).\n> \n> Right. Although to be fair, that idea doesn't sound as good if we're\n> going to have settings other than \"on\" or \"off\".\n\nYeah. The patch as it stands allow for disabling specific types rather than\nall-or-nothing, which is why the name was \"ignore\".\n\n> I'm not sure what the best thing to do is here, I just think it\n> deserves some thought.\n\nAbsolutely, the discussion is much appreciated. Having done some thinking I\nthink I'm still partial to framing it as a disabling GUC rather than an\nenabling; with the act of setting it being \"As an admin I want to skip\nexecution of all evt's of type X\". \n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Apr 2023 23:35:14 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Mon, Apr 03, 2023 at 11:35:14PM +0200, Daniel Gustafsson wrote:\n> Yeah. The patch as it stands allow for disabling specific types rather than\n> all-or-nothing, which is why the name was \"ignore\".\n\nFWIW, I agree with Robert's points here:\n- disable_event_triggers or ignore_event_triggers = off leads to a\ndouble-negative meaning, which is a positive. Depending on one's\nnative language that can be confusing.\n- Being able to write a list of event triggers working would be much\nmore interesting than just individual elements.\n- There may be an argument for negated patterns? Say,\na \"!sql_drop,!ddl_command_start\" would cause sql_drop and\nddl_command_start to be disabled with all the others enabled, and one\nshould not ne able to mix negated and non-negated patterns.\n\nA few days before the end of the commit fest, perhaps you'd better\nhead towards having only an event_trigger = on | off or all | none and\nconsider expanding that later on? From what I get at the top of the\nthread, this would satisfy the main use case you seemed to worry\nabout to begin with.\n--\nMichael",
"msg_date": "Wed, 5 Apr 2023 17:10:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 5 Apr 2023, at 10:10, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Apr 03, 2023 at 11:35:14PM +0200, Daniel Gustafsson wrote:\n>> Yeah. The patch as it stands allow for disabling specific types rather than\n>> all-or-nothing, which is why the name was \"ignore\".\n> \n> FWIW, I agree with Robert's points here:\n> - disable_event_triggers or ignore_event_triggers = off leads to a\n> double-negative meaning, which is a positive. Depending on one's\n> native language that can be confusing.\n\nI agree that ignore=off would be suboptimal, but the patch doesn't have that\nbut instead ignore_event_trigger=none for that case, which I personally don't\nthink carries the same issue.\n\n> - Being able to write a list of event triggers working would be much\n> more interesting than just individual elements.\n> - There may be an argument for negated patterns? Say,\n> a \"!sql_drop,!ddl_command_start\" would cause sql_drop and\n> ddl_command_start to be disabled with all the others enabled, and one\n> should not ne able to mix negated and non-negated patterns.\n\nI'm not convinced that it's in our interest to offer a GUC to configure the\ncluster by selectively turning off SQL features. The ones we have for planner\ntuning which is a different beast. At the very least it should be in a thread\ncovering that topic, as it might be a bit hidden here.\n\nThe use case envisioned here is to allow an admin to log in to a database with\na broken EVT without having to use single user mode. Basically, it should be a\nconvenient way of temporarily halting the execution of buggy code, not a\nvehicle for permanent cluster config (even though in light of the above para\nit's clear that it can be misused like that). 
Maybe there should be a log\nevent highlighting that the cluster is running with an EVT type ignored?\nAnd/or logging the names of the EVT's that otherwise would've been executed?\n\n> A few days before the end of the commit fest, perhaps you'd better\n> head towards having only an event_trigger = on | off or all | none and\n> consider expanding that later on? From what I get at the top of the\n> thread, this would satisfy the main use case you seemed to worry\n> about to begin with.\n\nIf there are concerns with any part of the patch at this point, and the\ncomments above indicate that, I'd say it's better to push this to the v17\ncycle.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 10:57:18 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 4:57 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > - Being able to write a list of event triggers working would be much\n> > more interesting than just individual elements.\n> > - There may be an argument for negated patterns? Say,\n> > a \"!sql_drop,!ddl_command_start\" would cause sql_drop and\n> > ddl_command_start to be disabled with all the others enabled, and one\n> > should not ne able to mix negated and non-negated patterns.\n>\n> I'm not convinced that it's in our interest to offer a GUC to configure the\n> cluster by selectively turning off SQL features. The ones we have for planner\n> tuning which is a different beast. At the very least it should be in a thread\n> covering that topic, as it might be a bit hidden here.\n\nBut, isn't that exactly what you're proposing?\n\nI mean if this was just event_triggers = on | off it would be exactly\nlike row_security and as far as I'm concerned there would be nothing\nto debate. But it sounded like you wanted something finer-grained that\ncould disable certain kinds of event triggers. That's also what\nMichael is proposing, just with different syntax. In other words,\nwhere you would say ignore_event_triggers = sql_drop, he'd say\nevent_triggers = !sql_drop.\n\nMaybe we should back up and ask why we need more than \"on\" and \"off\".\nIf somebody is using this feature in any form more than very\noccasionally, they should really go home and reconsider their database\nschema.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 10:30:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Maybe we should back up and ask why we need more than \"on\" and \"off\".\n> If somebody is using this feature in any form more than very\n> occasionally, they should really go home and reconsider their database\n> schema.\n\n+1 ... this seems perhaps overdesigned.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Apr 2023 10:43:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 5 Apr 2023, at 16:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Apr 5, 2023 at 4:57 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> - Being able to write a list of event triggers working would be much\n>>> more interesting than just individual elements.\n>>> - There may be an argument for negated patterns? Say,\n>>> a \"!sql_drop,!ddl_command_start\" would cause sql_drop and\n>>> ddl_command_start to be disabled with all the others enabled, and one\n>>> should not ne able to mix negated and non-negated patterns.\n>> \n>> I'm not convinced that it's in our interest to offer a GUC to configure the\n>> cluster by selectively turning off SQL features. The ones we have for planner\n>> tuning which is a different beast. At the very least it should be in a thread\n>> covering that topic, as it might be a bit hidden here.\n> \n> But, isn't that exactly what you're proposing?\n\nYeah, but it didn't really strike me until after typing and sending that email.\n\n> I mean if this was just event_triggers = on | off it would be exactly\n> like row_security and as far as I'm concerned there would be nothing\n> to debate.\n\nWhich is what v1-v3 did until I changed it based on review input, but I agree\nwith the reasoning here so will revert back (with some internal changes too).\nMoving this to the next CF for another stab at it when the tree re-opens.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 20:43:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Wed, Apr 05, 2023 at 10:43:23AM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Maybe we should back up and ask why we need more than \"on\" and \"off\".\n>> If somebody is using this feature in any form more than very\n>> occasionally, they should really go home and reconsider their database\n>> schema.\n> \n> +1 ... this seems perhaps overdesigned.\n\nYes. If you begin with an \"on\"/\"off\" switch, it could always be\nextended later if someone makes a case for it, with a grammar like one\nI mentioned upthread, or even something else. If there is no strong\ncase for more than a boolean for now, simpler is better.\n--\nMichael",
"msg_date": "Thu, 6 Apr 2023 07:06:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 6 Apr 2023, at 00:06, Michael Paquier <michael@paquier.xyz> wrote:\n\n> If there is no strong\n> case for more than a boolean for now, simpler is better.\n\nThe attached version of the patch replaces it with a boolean flag for turning\noff all event triggers, and I also renamed it to the debug_xxx \"GUC namespace\"\nwhich seemed more appropriate.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 5 Sep 2023 14:12:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 8:12 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 6 Apr 2023, at 00:06, Michael Paquier <michael@paquier.xyz> wrote:\n> > If there is no strong\n> > case for more than a boolean for now, simpler is better.\n>\n> The attached version of the patch replaces it with a boolean flag for turning\n> off all event triggers, and I also renamed it to the debug_xxx \"GUC namespace\"\n> which seemed more appropriate.\n\nI don't care for the debug_xxx renaming, myself. I think that should\nbe reserved for things where we believe there's no reason to ever use\nit in production/real life, or for things whose purpose is to emit\ndebugging messages. Neither is the case here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:29:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 5 Sep 2023, at 17:29, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Sep 5, 2023 at 8:12 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> The attached version of the patch replaces it with a boolean flag for turning\n>> off all event triggers, and I also renamed it to the debug_xxx \"GUC namespace\"\n>> which seemed more appropriate.\n> \n> I don't care for the debug_xxx renaming, myself. I think that should\n> be reserved for things where we believe there's no reason to ever use\n> it in production/real life, or for things whose purpose is to emit\n> debugging messages. Neither is the case here.\n\nFair enough, how about disable_event_trigger instead as per the attached?\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 6 Sep 2023 10:50:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 4:50 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 5 Sep 2023, at 17:29, Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Sep 5, 2023 at 8:12 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> >> The attached version of the patch replaces it with a boolean flag for turning\n> >> off all event triggers, and I also renamed it to the debug_xxx \"GUC namespace\"\n> >> which seemed more appropriate.\n> >\n> > I don't care for the debug_xxx renaming, myself. I think that should\n> > be reserved for things where we believe there's no reason to ever use\n> > it in production/real life, or for things whose purpose is to emit\n> > debugging messages. Neither is the case here.\n>\n> Fair enough, how about disable_event_trigger instead as per the attached?\n\nI usually prefer to give things a positive sense, talking about\nwhether things are enabled rather than disabled. I'd do event_triggers\n= off | on, like we have for row_security. YMMV, though.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 10:22:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 6 Sep 2023, at 16:22, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Sep 6, 2023 at 4:50 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> Fair enough, how about disable_event_trigger instead as per the attached?\n> \n> I usually prefer to give things a positive sense, talking about\n> whether things are enabled rather than disabled. I'd do event_triggers\n> = off | on, like we have for row_security. YMMV, though.\n\nFair enough, I don't have strong opinions and I do agree that making this work\nlike row_security is a good thing for consistency. Done in the attached.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 6 Sep 2023 22:23:55 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Wed, Sep 06, 2023 at 10:23:55PM +0200, Daniel Gustafsson wrote:\n> > On 6 Sep 2023, at 16:22, Robert Haas <robertmhaas@gmail.com> wrote:\n>> I usually prefer to give things a positive sense, talking about\n>> whether things are enabled rather than disabled. I'd do event_triggers\n>> = off | on, like we have for row_security. YMMV, though.\n> \n> Fair enough, I don't have strong opinions and I do agree that making this work\n> like row_security is a good thing for consistency. Done in the attached.\n\nThis point has been raised a couple of months ago:\nhttps://www.postgresql.org/message-id/ZC0s%2BBRMqRupDInQ%40paquier.xyz\n\n+SET event_triggers = 'on';\n+CREATE POLICY pguc ON event_trigger_test USING (FALSE);\n+DROP POLICY pguc ON event_trigger_test;\n\nThis provides checks for the start, end and drop events. Shouldn't\ntable_rewrite also be covered?\n\n+ GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n\nI am a bit surprised by these two additions. Setting this GUC at\nfile-level can be useful, as is documenting it in the control file if\nit provides some control of how a statement behaves, no?\n\n+ Allow temporarily disabling execution of event triggers in order to\n+ troubleshoot and repair faulty event triggers. All event triggers will\n+ be disabled by setting it to <literal>true</literal>. Setting the value\n+ to <literal>false</literal> will not disable any event triggers, this\n+ is the default value. Only superusers and users with the appropriate\n+ <literal>SET</literal> privilege can change this setting.\n\nEvent triggers are disabled if setting this GUC to false, while true,\nthe default, allows event triggers. The values are reversed in this\ndescription.\n--\nMichael",
"msg_date": "Thu, 7 Sep 2023 14:57:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 1:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n> + GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n>\n> I am a bit surprised by these two additions. Setting this GUC at\n> file-level can be useful, as is documenting it in the control file if\n> it provides some control of how a statement behaves, no?\n\nYeah, I don't think these options should be used.\n\n> + Allow temporarily disabling execution of event triggers in order to\n> + troubleshoot and repair faulty event triggers. All event triggers will\n> + be disabled by setting it to <literal>true</literal>. Setting the value\n> + to <literal>false</literal> will not disable any event triggers, this\n> + is the default value. Only superusers and users with the appropriate\n> + <literal>SET</literal> privilege can change this setting.\n>\n> Event triggers are disabled if setting this GUC to false, while true,\n> the default, allows event triggers. The values are reversed in this\n> description.\n\nWoops.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 15:02:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 7 Sep 2023, at 21:02, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Sep 7, 2023 at 1:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> + GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE\n>> \n>> I am a bit surprised by these two additions. Setting this GUC at\n>> file-level can be useful, as is documenting it in the control file if\n>> it provides some control of how a statement behaves, no?\n> \n> Yeah, I don't think these options should be used.\n\nRemoved.\n\n>> + Allow temporarily disabling execution of event triggers in order to\n>> + troubleshoot and repair faulty event triggers. All event triggers will\n>> + be disabled by setting it to <literal>true</literal>. Setting the value\n>> + to <literal>false</literal> will not disable any event triggers, this\n>> + is the default value. Only superusers and users with the appropriate\n>> + <literal>SET</literal> privilege can change this setting.\n>> \n>> Event triggers are disabled if setting this GUC to false, while true,\n>> the default, allows event triggers. The values are reversed in this\n>> description.\n> \n> Woops.\n\nFixed.\n\nSince the main driver for this is to reduce the usage/need for single-user mode\nI also reworded the patch slightly. Instead of phrasing this as an alternative\nto single-user mode, I reversed it such that single-user mode is an alternative\nto this GUC.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 22 Sep 2023 17:29:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 05:29:01PM +0200, Daniel Gustafsson wrote:\n> Since the main driver for this is to reduce the usage/need for single-user mode\n> I also reworded the patch slightly. Instead of phrasing this as an alternative\n> to single-user mode, I reversed it such that single-user mode is an alternative\n> to this GUC.\n\nOkay.\n\n+ be disabled by setting it to <literal>false</literal>. Setting the value\n+ to <literal>true</literal> will not disable any event triggers, this\n\nThis uses a double negation. Perhaps just \"Setting this value to true\nallows all event triggers to run.\"\n\n003_check_guc.pl has detected a failure because event_triggers is\nmissing in postgresql.conf.sample while it is not marked with\nGUC_NOT_IN_SAMPLE anymore.\n\nKeeping the docs consistent with the sample file, I would suggest the\nattached on top of your v9.\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 08:35:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 25 Sep 2023, at 01:35, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Sep 22, 2023 at 05:29:01PM +0200, Daniel Gustafsson wrote:\n>> Since the main driver for this is to reduce the usage/need for single-user mode\n>> I also reworded the patch slightly. Instead of phrasing this as an alternative\n>> to single-user mode, I reversed it such that single-user mode is an alternative\n>> to this GUC.\n> \n> Okay.\n> \n> + be disabled by setting it to <literal>false</literal>. Setting the value\n> + to <literal>true</literal> will not disable any event triggers, this\n> \n> This uses a double negation. Perhaps just \"Setting this value to true\n> allows all event triggers to run.\"\n\nFair enough, although I used \"fire\" instead of \"run\" which is consistent with\nthe event trigger documentation.\n\n> 003_check_guc.pl has detected a failure because event_triggers is\n> missing in postgresql.conf.sample while it is not marked with\n> GUC_NOT_IN_SAMPLE anymore.\n> \n> Keeping the docs consistent with the sample file, I would suggest the\n> attached on top of your v9.\n\nAh, yes.\n\nThe attached v10 has the above to fixes.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 25 Sep 2023 09:19:35 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 09:19:35AM +0200, Daniel Gustafsson wrote:\n> Fair enough, although I used \"fire\" instead of \"run\" which is consistent with\n> the event trigger documentation.\n\nOkay by me.\n--\nMichael",
"msg_date": "Mon, 25 Sep 2023 16:50:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 25 Sep 2023, at 09:50, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Sep 25, 2023 at 09:19:35AM +0200, Daniel Gustafsson wrote:\n>> Fair enough, although I used \"fire\" instead of \"run\" which is consistent with\n>> the event trigger documentation.\n> \n> Okay by me.\n\nGreat, I'll go ahead and apply this version then. Thanks for review!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 09:52:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
},
{
"msg_contents": "> On 25 Sep 2023, at 09:52, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 25 Sep 2023, at 09:50, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Mon, Sep 25, 2023 at 09:19:35AM +0200, Daniel Gustafsson wrote:\n>>> Fair enough, although I used \"fire\" instead of \"run\" which is consistent with\n>>> the event trigger documentation.\n>> \n>> Okay by me.\n> \n> Great, I'll go ahead and apply this version then. Thanks for review!\n\nAnd applied, closing the CF entry.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:22:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: GUC for temporarily disabling event triggers"
}
]
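The outcome of the thread above was a simple boolean `event_triggers` GUC, applied near the end of the thread (it first appears in PostgreSQL 17). As a hedged sketch of the recovery workflow it enables — assuming a server version that has the parameter, and using illustrative names (`blocker`, `broken_evt`) — a faulty event trigger can be repaired without falling back to single-user mode:

```sql
-- A deliberately broken event trigger: once installed, DDL commands
-- fail at ddl_command_start -- the lock-out that historically forced
-- admins into single-user mode to repair.
CREATE FUNCTION broken_evt() RETURNS event_trigger
    LANGUAGE plpgsql AS
$$
BEGIN
    RAISE EXCEPTION 'bug in event trigger';
END;
$$;
CREATE EVENT TRIGGER blocker ON ddl_command_start
    EXECUTE FUNCTION broken_evt();

-- With the GUC, a superuser (or a role granted SET privilege on it)
-- can suppress event trigger firing for the session and clean up:
SET event_triggers = off;
DROP EVENT TRIGGER blocker;
DROP FUNCTION broken_evt();
RESET event_triggers;   -- back to the default (on), like row_security
```

This mirrors the `row_security`-style on/off design the thread settled on: a temporary escape hatch for buggy trigger code, not a permanent configuration knob.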
[
{
"msg_contents": "Hi,\n\nA replication slot can be lost when a subscriber is not able to catch up\nwith the load on the primary and the WAL to catch up exceeds\nmax_slot_wal_keep_size. When this happens, the target has to be reseeded\n(pg_dump) from scratch and this can take a long time. I am investigating the\noptions to revive a lost slot. With the attached patch and copying the WAL\nfiles from the archive to the pg_wal directory I was able to revive the lost\nslot. I also verified that a lost slot doesn't let vacuum clean up the\ncatalog tuples deleted by any transaction later than catalog_xmin. One side\neffect of this approach is that the checkpointer creates .ready files in the\narchive_status folder corresponding to the copied WAL files. The archive\ncommand has to handle this case. At the same time, the checkpointer can\npotentially delete the file again before the subscriber consumes it.\nIn the proposed patch, I am not setting restart_lsn\nto InvalidXLogRecPtr but instead relying on the invalidated_at field to tell if\nthe slot is lost. Was the intent of setting restart_lsn to InvalidXLogRecPtr\nto disallow reviving the slot?\n\nIf the overall direction seems ok, I would continue the work to revive the\nslot by copying the WAL files from the archive. Appreciate your feedback.\n\nThanks,\nSirisha",
"msg_date": "Fri, 4 Nov 2022 01:10:39 -0700",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reviving lost replication slots"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> A replication slot can be lost when a subscriber is not able to catch up with the load on the primary and the WAL to catch up exceeds max_slot_wal_keep_size. When this happens, target has to be reseeded (pg_dump) from the scratch and this can take longer. I am investigating the options to revive a lost slot.\n>\n\nWhy in the first place one has to set max_slot_wal_keep_size if they\ncare for WAL more than that? If you have a case where you want to\nhandle this case for some particular slot (where you are okay with the\ninvalidation of other slots exceeding max_slot_wal_keep_size) then the\nother possibility could be to have a similar variable at the slot\nlevel but not sure if that is a good idea because you haven't\npresented any such case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 5 Nov 2022 11:32:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "Hi Amit,\n\nThanks for your comments!\n\nOn Fri, Nov 4, 2022 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> >\n> > A replication slot can be lost when a subscriber is not able to catch up\n> with the load on the primary and the WAL to catch up exceeds\n> max_slot_wal_keep_size. When this happens, target has to be reseeded\n> (pg_dump) from the scratch and this can take longer. I am investigating the\n> options to revive a lost slot.\n> >\n>\n> Why in the first place one has to set max_slot_wal_keep_size if they\n> care for WAL more than that?\n\nDisk full is a typical use where we can't wait until the logical slots to\ncatch up before truncating the log.\n\n> If you have a case where you want to\n> handle this case for some particular slot (where you are okay with the\n> invalidation of other slots exceeding max_slot_wal_keep_size) then the\n> other possibility could be to have a similar variable at the slot\n> level but not sure if that is a good idea because you haven't\n> presented any such case.\n>\nIIUC, ability to fetch WAL from the archive as a fall back mechanism should\nautomatically take care of all the lost slots. Do you see a need to take\ncare of a specific slot? If the idea is not to download the wal files in\nthe pg_wal directory, they can be placed in a slot specific folder\n(data/pg_replslot/<slot>/) until they are needed while decoding and can be\nremoved.\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Mon, 7 Nov 2022 22:37:49 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 12:08 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> On Fri, Nov 4, 2022 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n>> <sirichamarthi22@gmail.com> wrote:\n>> >\n>> > A replication slot can be lost when a subscriber is not able to catch up with the load on the primary and the WAL to catch up exceeds max_slot_wal_keep_size. When this happens, target has to be reseeded (pg_dump) from the scratch and this can take longer. I am investigating the options to revive a lost slot.\n>> >\n>>\n>> Why in the first place one has to set max_slot_wal_keep_size if they\n>> care for WAL more than that?\n>\n> Disk full is a typical use where we can't wait until the logical slots to catch up before truncating the log.\n\nIf the max_slot_wal_keep_size is set appropriately and the replication\nlag is monitored properly along with some automatic actions such as\nreplacing/rebuilding the standbys or subscribers (which may not be\neasy and cheap though), the chances of hitting the \"lost replication\"\nproblem becomes less, but not zero always.\n\n>> If you have a case where you want to\n>> handle this case for some particular slot (where you are okay with the\n>> invalidation of other slots exceeding max_slot_wal_keep_size) then the\n>> other possibility could be to have a similar variable at the slot\n>> level but not sure if that is a good idea because you haven't\n>> presented any such case.\n>\n> IIUC, ability to fetch WAL from the archive as a fall back mechanism should automatically take care of all the lost slots. Do you see a need to take care of a specific slot? If the idea is not to download the wal files in the pg_wal directory, they can be placed in a slot specific folder (data/pg_replslot/<slot>/) until they are needed while decoding and can be removed.\n\nIs the idea here the core copying back the WAL files from the archive?\nIf yes, I think it is not something the core needs to do. This very\nwell fits the job of an extension or an external module that revives\nthe lost replication slots by copying WAL files from archive location.\n\nHaving said above, what's the best way to revive a lost replication\nslot today? Any automated way exists today? It seems like\npg_replication_slot_advance() doesn't do anything for the\ninvalidated/lost slots.\n\nIf it's a streaming replication slot, the standby will anyway jump to\narchive mode ignoring the replication slot and the slot will never be\nusable again unless somebody creates a new replication slot and\nprovides it to the standby for reuse.\nIf it's a logical replication slot, the subscriber will start to\ndiverge from the publisher and the slot will have to be revived\nmanually i.e. created again.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 8 Nov 2022 12:47:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Tue, Nov 8, 2022 at 12:08 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> On Fri, Nov 4, 2022 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n>> <sirichamarthi22@gmail.com> wrote:\n>> >\n>> > A replication slot can be lost when a subscriber is not able to catch up with the load on the primary and the WAL to catch up exceeds max_slot_wal_keep_size. When this happens, target has to be reseeded (pg_dump) from the scratch and this can take longer. I am investigating the options to revive a lost slot.\n>> >\n>>\n>> Why in the first place one has to set max_slot_wal_keep_size if they\n>> care for WAL more than that?\n>\n> Disk full is a typical use where we can't wait until the logical slots to catch up before truncating the log.\n>\n\nIdeally, in such a case the subscriber should fall back to the\nphysical standby of the publisher but unfortunately, we don't yet have\na functionality where subscribers can continue logical replication\nfrom physical standby. Do you think if we had such functionality it\nwould serve our purpose?\n\n>> If you have a case where you want to\n>> handle this case for some particular slot (where you are okay with the\n>> invalidation of other slots exceeding max_slot_wal_keep_size) then the\n>> other possibility could be to have a similar variable at the slot\n>> level but not sure if that is a good idea because you haven't\n>> presented any such case.\n>\n> IIUC, ability to fetch WAL from the archive as a fall back mechanism should automatically take care of all the lost slots. Do you see a need to take care of a specific slot?\n>\n\nNo, I was just trying to see if your use case can be addressed in some\nother way. BTW, won't copying the WAL again back from archive can lead\nto a disk full situation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Nov 2022 15:06:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
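For context on the slot invalidation being debated above: since PostgreSQL 13 the `pg_replication_slots` view exposes `wal_status` and `safe_wal_size`, which is how the "monitor the lag properly" advice can be put into practice before a slot crosses the `max_slot_wal_keep_size` limit and becomes lost. A sketch of such a monitoring query (column semantics per the view's documentation):

```sql
-- Slots whose reserved WAL is at risk under max_slot_wal_keep_size:
--   'unreserved' - needed segments may be removed at the next checkpoint
--   'lost'       - required segments are already gone; the slot is unusable
SELECT slot_name,
       slot_type,
       active,
       wal_status,
       pg_size_pretty(safe_wal_size) AS safe_wal_size  -- NULL once lost
FROM pg_replication_slots
WHERE wal_status IN ('unreserved', 'lost');
```

`safe_wal_size` is the number of bytes that can still be written to WAL before this slot is in danger, so alerting on it shrinking gives time to replace or rebuild a lagging subscriber before reseeding becomes the only option.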
{
"msg_contents": "On Tue, Nov 8, 2022 at 1:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Nov 8, 2022 at 12:08 PM sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> >\n> > On Fri, Nov 4, 2022 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n> >> <sirichamarthi22@gmail.com> wrote:\n> >> >\n> >> > A replication slot can be lost when a subscriber is not able to catch\n> up with the load on the primary and the WAL to catch up exceeds\n> max_slot_wal_keep_size. When this happens, target has to be reseeded\n> (pg_dump) from the scratch and this can take longer. I am investigating the\n> options to revive a lost slot.\n> >> >\n> >>\n> >> Why in the first place one has to set max_slot_wal_keep_size if they\n> >> care for WAL more than that?\n> >\n> > Disk full is a typical use where we can't wait until the logical slots\n> to catch up before truncating the log.\n> >\n>\n> Ideally, in such a case the subscriber should fall back to the\n> physical standby of the publisher but unfortunately, we don't yet have\n> a functionality where subscribers can continue logical replication\n> from physical standby. Do you think if we had such functionality it\n> would serve our purpose?\n>\n\nDon't think streaming from standby helps as the disk layout is expected\nto remain the same on physical standby and primary.\n\n> >> If you have a case where you want to\n> >> handle this case for some particular slot (where you are okay with the\n> >> invalidation of other slots exceeding max_slot_wal_keep_size) then the\n> >> other possibility could be to have a similar variable at the slot\n> >> level but not sure if that is a good idea because you haven't\n> >> presented any such case.\n> >\n> > IIUC, ability to fetch WAL from the archive as a fall back mechanism\n> should automatically take care of all the lost slots. Do you see a need to\n> take care of a specific slot?\n> >\n>\n> No, I was just trying to see if your use case can be addressed in some\n> other way. BTW, won't copying the WAL again back from archive can lead\n> to a disk full situation.\n>\nThe idea is to download the WAL from archive on demand as the slot requires\nthem and throw away the segment once processed.\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 8 Nov 2022 19:26:45 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reviving lost replication slots"
},
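The thread above turns on slots being invalidated once the WAL they retain exceeds max_slot_wal_keep_size. As an illustrative sketch (assuming PostgreSQL 13 or later, where pg_replication_slots exposes wal_status and safe_wal_size; the 1 GB threshold is an arbitrary example), a monitoring query can flag slots before they are lost:

```sql
-- Hedged sketch: flag slots whose retained WAL is close to being removed.
-- wal_status degrades reserved -> extended -> unreserved -> lost;
-- safe_wal_size is the bytes left before invalidation (NULL once lost).
SELECT slot_name,
       wal_status,
       safe_wal_size,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
           AS retained_wal
FROM pg_replication_slots
WHERE wal_status IN ('unreserved', 'lost')
   OR safe_wal_size < 1024 * 1024 * 1024;   -- within ~1 GB of the limit
```

Acting on this (rebuilding a subscriber, dropping the slot, or raising the limit) before wal_status reaches 'lost' avoids the reseeding problem the thread describes.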
{
"msg_contents": "On Mon, Nov 7, 2022 at 11:17 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Tue, Nov 8, 2022 at 12:08 PM sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> >\n> > On Fri, Nov 4, 2022 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n> >> <sirichamarthi22@gmail.com> wrote:\n> >> >\n> >> > A replication slot can be lost when a subscriber is not able to catch\n> up with the load on the primary and the WAL to catch up exceeds\n> max_slot_wal_keep_size. When this happens, target has to be reseeded\n> (pg_dump) from the scratch and this can take longer. I am investigating the\n> options to revive a lost slot.\n> >> >\n> >>\n> >> Why in the first place one has to set max_slot_wal_keep_size if they\n> >> care for WAL more than that?\n> >\n> > Disk full is a typical use where we can't wait until the logical slots\n> to catch up before truncating the log.\n>\n> If the max_slot_wal_keep_size is set appropriately and the replication\n> lag is monitored properly along with some automatic actions such as\n> replacing/rebuilding the standbys or subscribers (which may not be\n> easy and cheap though), the chances of hitting the \"lost replication\"\n> problem becomes less, but not zero always.\n>\n\npg_dump and pg_restore can take several hours to days on a large database.\nKeeping the WAL in the pg_wal folder (faster, smaller and costly disks?) 
is\nnot always an option.\n\n\n>\n> >> If you have a case where you want to\n> >> handle this case for some particular slot (where you are okay with the\n> >> invalidation of other slots exceeding max_slot_wal_keep_size) then the\n> >> other possibility could be to have a similar variable at the slot\n> >> level but not sure if that is a good idea because you haven't\n> >> presented any such case.\n> >\n> > IIUC, ability to fetch WAL from the archive as a fall back mechanism\n> should automatically take care of all the lost slots. Do you see a need to\n> take care of a specific slot? If the idea is not to download the wal files\n> in the pg_wal directory, they can be placed in a slot specific folder\n> (data/pg_replslot/<slot>/) until they are needed while decoding and can be\n> removed.\n>\n> Is the idea here the core copying back the WAL files from the archive?\n> If yes, I think it is not something the core needs to do. This very\n> well fits the job of an extension or an external module that revives\n> the lost replication slots by copying WAL files from archive location.\n>\n\nThe current code is throwing an error that the slot is lost because the\nrestart_lsn is set to invalid LSN when the WAL is truncated by\ncheckpointer. In order to build an external service that can revive a lost\nslot, at the minimum we needed the patch attached.\n\n\n>\n> Having said above, what's the best way to revive a lost replication\n> slot today? Any automated way exists today? 
It seems like\n> pg_replication_slot_advance() doesn't do anything for the\n> invalidated/lost slots.\n>\n\n If the WAL is available in the pg_wal directory, the replication stream\nresumes normally when the client connects with the patch I posted.\n\n\n>\n> If it's a streaming replication slot, the standby will anyway jump to\n> archive mode ignoring the replication slot and the slot will never be\n> usable again unless somebody creates a new replication slot and\n> provides it to the standby for reuse.\n> If it's a logical replication slot, the subscriber will start to\n> diverge from the publisher and the slot will have to be revived\n> manually i.e. created again.\n>\n\nPhysical slots can be revived with standby downloading the WAL from the\narchive directly. This patch is helpful for the logical slots.\n\n\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\nOn Mon, Nov 7, 2022 at 11:17 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Tue, Nov 8, 2022 at 12:08 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> On Fri, Nov 4, 2022 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n>> <sirichamarthi22@gmail.com> wrote:\n>> >\n>> > A replication slot can be lost when a subscriber is not able to catch up with the load on the primary and the WAL to catch up exceeds max_slot_wal_keep_size. When this happens, target has to be reseeded (pg_dump) from the scratch and this can take longer. 
I am investigating the options to revive a lost slot.\n>> >\n>>\n>> Why in the first place one has to set max_slot_wal_keep_size if they\n>> care for WAL more than that?\n>\n> Disk full is a typical use where we can't wait until the logical slots to catch up before truncating the log.\n\nIf the max_slot_wal_keep_size is set appropriately and the replication\nlag is monitored properly along with some automatic actions such as\nreplacing/rebuilding the standbys or subscribers (which may not be\neasy and cheap though), the chances of hitting the \"lost replication\"\nproblem becomes less, but not zero always.pg_dump and pg_restore can take several hours to days on a large database. Keeping the WAL in the pg_wal folder (faster, smaller and costly disks?) is not always an option. \n\n>> If you have a case where you want to\n>> handle this case for some particular slot (where you are okay with the\n>> invalidation of other slots exceeding max_slot_wal_keep_size) then the\n>> other possibility could be to have a similar variable at the slot\n>> level but not sure if that is a good idea because you haven't\n>> presented any such case.\n>\n> IIUC, ability to fetch WAL from the archive as a fall back mechanism should automatically take care of all the lost slots. Do you see a need to take care of a specific slot? If the idea is not to download the wal files in the pg_wal directory, they can be placed in a slot specific folder (data/pg_replslot/<slot>/) until they are needed while decoding and can be removed.\n\nIs the idea here the core copying back the WAL files from the archive?\nIf yes, I think it is not something the core needs to do. This very\nwell fits the job of an extension or an external module that revives\nthe lost replication slots by copying WAL files from archive location. The current code is throwing an error that the slot is lost because the restart_lsn is set to invalid LSN when the WAL is truncated by checkpointer. 
In order to build an external service that can revive a lost slot, at the minimum we needed the patch attached. \n\nHaving said above, what's the best way to revive a lost replication\nslot today? Any automated way exists today? It seems like\npg_replication_slot_advance() doesn't do anything for the\ninvalidated/lost slots. If the WAL is available in the pg_wal directory, the replication stream resumes normally when the client connects with the patch I posted. \n\nIf it's a streaming replication slot, the standby will anyway jump to\narchive mode ignoring the replication slot and the slot will never be\nusable again unless somebody creates a new replication slot and\nprovides it to the standby for reuse.\nIf it's a logical replication slot, the subscriber will start to\ndiverge from the publisher and the slot will have to be revived\nmanually i.e. created again.Physical slots can be revived with standby downloading the WAL from the archive directly. This patch is helpful for the logical slots. \n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 8 Nov 2022 19:39:58 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reviving lost replication slots"
},
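Fetching missing segments from the archive on demand, as discussed above, mirrors what archive recovery already does on a standby via restore_command. A minimal configuration sketch (the archive path is hypothetical; %f is the segment file name and %p the target path the server requests):

```shell
# postgresql.conf fragment on a standby (hypothetical archive location):
# when a segment is absent from pg_wal, copy it back from the archive.
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
```

The proposal in this thread would effectively give walsenders a comparable fallback, which the core currently lacks.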
{
"msg_contents": "I don't think walsenders fetching segment from archive is totally\nstupid. With that feature, we can use fast and expensive but small\nstorage for pg_wal, while avoiding replciation from dying even in\nemergency.\n\nAt Tue, 8 Nov 2022 19:39:58 -0800, sirisha chamarthi <sirichamarthi22@gmail.com> wrote in \n> > If it's a streaming replication slot, the standby will anyway jump to\n> > archive mode ignoring the replication slot and the slot will never be\n> > usable again unless somebody creates a new replication slot and\n> > provides it to the standby for reuse.\n> > If it's a logical replication slot, the subscriber will start to\n> > diverge from the publisher and the slot will have to be revived\n> > manually i.e. created again.\n> >\n> \n> Physical slots can be revived with standby downloading the WAL from the\n> archive directly. This patch is helpful for the logical slots.\n\nHowever, supposing that WalSndSegmentOpen() fetches segments from\narchive as the fallback and that succeeds, the slot can survive\nmissing WAL in pg_wal in the first place. So this patch doesn't seem\nto be needed for the purpose.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 09 Nov 2022 17:32:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 2:02 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I don't think walsenders fetching segment from archive is totally\n> stupid. With that feature, we can use fast and expensive but small\n> storage for pg_wal, while avoiding replciation from dying even in\n> emergency.\n\nIt seems like a useful feature to have at least as an option and it\nsaves a lot of work - failovers, expensive rebuilds of\nstandbys/subscribers, manual interventions etc.\n\nIf you're saying that even the walsedners serving logical replication\nsubscribers would go fetch from the archive location for the removed\nWAL files, it mandates enabling archiving on the subscribers. And we\nknow that the archiving is not cheap and has its own advantages and\ndisadvantages, so the feature may or may not help.\nIf you're saying that only the walsedners serving streaming\nreplication standbys would go fetch from the archive location for the\nremoved WAL files, it's easy to implement, however it is not a\ncomplete feature and doesn't solve the problem for logical\nreplication.\nWith the feature, it'll be something like 'you, as primary/publisher,\narchive the WAL files and when you don't have them, you'll restore\nthem', it may not sound elegant, however, it can solve the lost\nreplication slots problem.\nAnd, the cost of restoring WAL files from the archive might further\nslow down the replication thus increasing the replication lag.\nAnd, one need to think, how many such WAL files are restored and kept,\nwhether they'll be kept in pg_wal or some other directory, how will\nthe disk full, fetching too old or too many WAL files for replication\nslots lagging behind, removal of unnecessary WAL files etc. 
be\nhandled.\n\nI'm not sure about other implications at this point of time.\n\nPerhaps, implementing this feature as a core/external extension by\nintroducing segment_open() or other necessary hooks might be worth it.\n\nIf implemented in some way, I think the scope of replication slot\ninvalidation/max_slot_wal_keep_size feature gets reduced or it can be\nremoved completely, no?\n\n> However, supposing that WalSndSegmentOpen() fetches segments from\n> archive as the fallback and that succeeds, the slot can survive\n> missing WAL in pg_wal in the first place. So this patch doesn't seem\n> to be needed for the purpose.\n\nThat is a simple solution one can think of and provide for streaming\nreplication standbys, however, is it worth implementing it in the core\nas explained above?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 9 Nov 2022 15:00:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 3:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Nov 9, 2022 at 2:02 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > I don't think walsenders fetching segment from archive is totally\n> > stupid. With that feature, we can use fast and expensive but small\n> > storage for pg_wal, while avoiding replciation from dying even in\n> > emergency.\n>\n> It seems like a useful feature to have at least as an option and it\n> saves a lot of work - failovers, expensive rebuilds of\n> standbys/subscribers, manual interventions etc.\n>\n> If you're saying that even the walsedners serving logical replication\n> subscribers would go fetch from the archive location for the removed\n> WAL files, it mandates enabling archiving on the subscribers.\n>\n\nWhy archiving on subscribers is required? Won't it be sufficient if\nthat is enabled on the publisher where we have walsender?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Nov 2022 15:53:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 3:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 9, 2022 at 3:00 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Nov 9, 2022 at 2:02 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > I don't think walsenders fetching segment from archive is totally\n> > > stupid. With that feature, we can use fast and expensive but small\n> > > storage for pg_wal, while avoiding replciation from dying even in\n> > > emergency.\n> >\n> > It seems like a useful feature to have at least as an option and it\n> > saves a lot of work - failovers, expensive rebuilds of\n> > standbys/subscribers, manual interventions etc.\n> >\n> > If you're saying that even the walsedners serving logical replication\n> > subscribers would go fetch from the archive location for the removed\n> > WAL files, it mandates enabling archiving on the subscribers.\n> >\n>\n> Why archiving on subscribers is required? Won't it be sufficient if\n> that is enabled on the publisher where we have walsender?\n\nUgh. A typo. I meant it mandates enabling archiving on publishers.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 9 Nov 2022 15:55:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n Is the intent of setting restart_lsn to InvalidXLogRecPtr was to\ndisallow reviving the slot?\n>\n\nI think the intent is to compute the correct value for\nreplicationSlotMinLSN as we use restart_lsn for it and using the\ninvalidated slot's restart_lsn value for it doesn't make sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Nov 2022 16:07:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
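The replicationSlotMinLSN value mentioned above is, conceptually, the oldest restart_lsn across all slots — the point up to which the server must keep WAL. A slot whose restart_lsn has been reset to InvalidXLogRecPtr (shown as NULL in the view) drops out of that minimum, which is why invalidation frees the WAL. A rough user-level analogue, not the internal computation itself:

```sql
-- Hedged sketch: the oldest restart_lsn still holding back WAL removal.
-- Lost slots report restart_lsn = NULL and no longer contribute.
SELECT min(restart_lsn) AS oldest_required_lsn
FROM pg_replication_slots
WHERE restart_lsn IS NOT NULL;
```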
{
"msg_contents": "On Wed, Nov 9, 2022 at 2:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n> <sirichamarthi22@gmail.com> wrote:\n> >\n> Is the intent of setting restart_lsn to InvalidXLogRecPtr was to\n> disallow reviving the slot?\n> >\n>\n> I think the intent is to compute the correct value for\n> replicationSlotMinLSN as we use restart_lsn for it and using the\n> invalidated slot's restart_lsn value for it doesn't make sense.\n>\n\n Correct. If a slot is invalidated (lost), then shouldn't we ignore the\nslot from computing the catalog_xmin? I don't see it being set to\nInvalidTransactionId in ReplicationSlotsComputeRequiredXmin. Attached a\nsmall patch to address this and the output after the patch is as shown\nbelow.\n\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database | temporary |\nactive | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase\n-----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n s2 | test_decoding | logical | 5 | postgres | f | f\n | | | 771 | 0/30466368 | 0/304663A0\n | reserved | 28903824 | f\n(1 row)\n\npostgres=# create table t2(c int, c1 char(100));\nCREATE TABLE\npostgres=# drop table t2;\nDROP TABLE\npostgres=# vacuum pg_class;\nVACUUM\npostgres=# select n_dead_tup from pg_stat_all_tables where relname =\n'pg_class';\n n_dead_tup\n------------\n 2\n(1 row)\n\npostgres=# select * from pg_stat_replication;\n pid | usesysid | usename | application_name | client_addr |\nclient_hostname | client_port | backend_start | backend_xmin | state |\nsent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag |\nreplay_lag | sync_pri\nority | sync_state | 
reply_time\n-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+--------------+-------+----------+-----------+-----------+------------+-----------+-----------+------------+---------\n------+------------+------------\n(0 rows)\n\npostgres=# insert into t1 select * from t1;\nINSERT 0 2097152\npostgres=# checkpoint;\nCHECKPOINT\npostgres=# select * from pg_replication_slots;\n slot_name | plugin | slot_type | datoid | database | temporary |\nactive | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | two_phase\n-----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+-----------\n s2 | test_decoding | logical | 5 | postgres | f | f\n | | | 771 | | 0/304663A0\n | lost | | f\n(1 row)\n\npostgres=# vacuum pg_class;\nVACUUM\npostgres=# select n_dead_tup from pg_stat_all_tables where relname =\n'pg_class';\n n_dead_tup\n------------\n 0\n(1 row)\n\n\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nOn Wed, Nov 9, 2022 at 2:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n Is the intent of setting restart_lsn to InvalidXLogRecPtr was to\ndisallow reviving the slot?\n>\n\nI think the intent is to compute the correct value for\nreplicationSlotMinLSN as we use restart_lsn for it and using the\ninvalidated slot's restart_lsn value for it doesn't make sense. Correct. If a slot is invalidated (lost), then shouldn't we ignore the slot from computing the catalog_xmin? I don't see it being set to InvalidTransactionId in ReplicationSlotsComputeRequiredXmin. 
Attached a small patch to address this and the output after the patch is as shown below.postgres=# select * from pg_replication_slots; slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn | wal_status | safe_wal_size | two_phase -----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+----------- s2 | test_decoding | logical | 5 | postgres | f | f | | | 771 | 0/30466368 | 0/304663A0 | reserved | 28903824 | f(1 row)postgres=# create table t2(c int, c1 char(100));CREATE TABLEpostgres=# drop table t2;DROP TABLEpostgres=# vacuum pg_class;VACUUMpostgres=# select n_dead_tup from pg_stat_all_tables where relname = 'pg_class'; n_dead_tup ------------ 2(1 row)postgres=# select * from pg_stat_replication; pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state | reply_time -----+----------+---------+------------------+-------------+-----------------+-------------+---------------+--------------+-------+----------+-----------+-----------+------------+-----------+-----------+------------+---------------+------------+------------(0 rows)postgres=# insert into t1 select * from t1;INSERT 0 2097152postgres=# checkpoint;CHECKPOINTpostgres=# select * from pg_replication_slots; slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn | wal_status | safe_wal_size | two_phase -----------+---------------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+----------- s2 | test_decoding | logical | 5 | postgres 
| f | f | | | 771 | | 0/304663A0 | lost | | f(1 row)postgres=# vacuum pg_class;VACUUMpostgres=# select n_dead_tup from pg_stat_all_tables where relname = 'pg_class'; n_dead_tup ------------ 0(1 row) \n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 10 Nov 2022 02:37:52 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reviving lost replication slots"
},
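The effect demonstrated in the session above — a lost slot still pinning catalog_xmin and preventing VACUUM from removing dead catalog rows — can be spotted with a query along these lines (a sketch, assuming PostgreSQL 13+ for the wal_status column):

```sql
-- Lost slots that still advertise a catalog_xmin keep holding back
-- catalog vacuum; absent the fix discussed here, they are candidates
-- for manual pg_drop_replication_slot().
SELECT slot_name, catalog_xmin, wal_status
FROM pg_replication_slots
WHERE wal_status = 'lost'
  AND catalog_xmin IS NOT NULL;
```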
{
"msg_contents": "On Wed, Nov 9, 2022 at 12:32 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> I don't think walsenders fetching segment from archive is totally\n> stupid. With that feature, we can use fast and expensive but small\n> storage for pg_wal, while avoiding replciation from dying even in\n> emergency.\n>\n\nThanks! If there is a general agreement on this in this forum, I would like\nto start working on this patch,\n\n\n>\n> At Tue, 8 Nov 2022 19:39:58 -0800, sirisha chamarthi <\n> sirichamarthi22@gmail.com> wrote in\n> > > If it's a streaming replication slot, the standby will anyway jump to\n> > > archive mode ignoring the replication slot and the slot will never be\n> > > usable again unless somebody creates a new replication slot and\n> > > provides it to the standby for reuse.\n> > > If it's a logical replication slot, the subscriber will start to\n> > > diverge from the publisher and the slot will have to be revived\n> > > manually i.e. created again.\n> > >\n> >\n> > Physical slots can be revived with standby downloading the WAL from the\n> > archive directly. This patch is helpful for the logical slots.\n>\n> However, supposing that WalSndSegmentOpen() fetches segments from\n> archive as the fallback and that succeeds, the slot can survive\n> missing WAL in pg_wal in the first place. So this patch doesn't seem\n> to be needed for the purpose.\n>\n\nAgree on this. If we add the proposed support, we don't need this patch.\n\n\n>\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\nOn Wed, Nov 9, 2022 at 12:32 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:I don't think walsenders fetching segment from archive is totally\nstupid. With that feature, we can use fast and expensive but small\nstorage for pg_wal, while avoiding replciation from dying even in\nemergency.Thanks! 
If there is a general agreement on this in this forum, I would like to start working on this patch, \n\nAt Tue, 8 Nov 2022 19:39:58 -0800, sirisha chamarthi <sirichamarthi22@gmail.com> wrote in \n> > If it's a streaming replication slot, the standby will anyway jump to\n> > archive mode ignoring the replication slot and the slot will never be\n> > usable again unless somebody creates a new replication slot and\n> > provides it to the standby for reuse.\n> > If it's a logical replication slot, the subscriber will start to\n> > diverge from the publisher and the slot will have to be revived\n> > manually i.e. created again.\n> >\n> \n> Physical slots can be revived with standby downloading the WAL from the\n> archive directly. This patch is helpful for the logical slots.\n\nHowever, supposing that WalSndSegmentOpen() fetches segments from\narchive as the fallback and that succeeds, the slot can survive\nmissing WAL in pg_wal in the first place. So this patch doesn't seem\nto be needed for the purpose.Agree on this. If we add the proposed support, we don't need this patch. \n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 10 Nov 2022 02:42:50 -0800",
"msg_from": "sirisha chamarthi <sirichamarthi22@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 4:07 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> On Wed, Nov 9, 2022 at 2:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Nov 4, 2022 at 1:40 PM sirisha chamarthi\n>> <sirichamarthi22@gmail.com> wrote:\n>> >\n>> Is the intent of setting restart_lsn to InvalidXLogRecPtr was to\n>> disallow reviving the slot?\n>> >\n>>\n>> I think the intent is to compute the correct value for\n>> replicationSlotMinLSN as we use restart_lsn for it and using the\n>> invalidated slot's restart_lsn value for it doesn't make sense.\n>\n>\n> Correct. If a slot is invalidated (lost), then shouldn't we ignore the slot from computing the catalog_xmin? I don't see it being set to InvalidTransactionId in ReplicationSlotsComputeRequiredXmin. Attached a small patch to address this and the output after the patch is as shown below.\n>\n\nI think you forgot to attach the patch. However, I suggest you start a\nseparate thread for this because the patch you are talking about here\nseems to be for an existing problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 11 Nov 2022 11:06:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 4:12 PM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> On Wed, Nov 9, 2022 at 12:32 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>\n>> I don't think walsenders fetching segment from archive is totally\n>> stupid. With that feature, we can use fast and expensive but small\n>> storage for pg_wal, while avoiding replciation from dying even in\n>> emergency.\n>\n> Thanks! If there is a general agreement on this in this forum, I would like to start working on this patch,\n\nI think starting with establishing/summarizing the problem, design\napproaches, implications etc. is a better idea than a patch. It might\ninvite more thoughts from the hackers.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 11 Nov 2022 11:58:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reviving lost replication slots"
}
] |
[
{
"msg_contents": "Over in\nhttps://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com\nJustin\nPryzby suggested that psql might need the ability to capture the shell exit\ncode.\n\nThis is a POC patch that does that, but doesn't touch on the ON_ERROR_STOP\nstuff.\n\nI've added some very rudimentary tests, but haven't touched the\ndocumentation, because I strongly suspect that someone will suggest a\nbetter name for the variable.\n\nBut basically, it works like this\n\n-- SHELL_EXIT_CODE is undefined\n\\echo :SHELL_EXIT_CODE\n:SHELL_EXIT_CODE\n-- bad \\!\n\\! borp\nsh: line 1: borp: command not found\n\\echo :SHELL_EXIT_CODE\n32512\n-- bad backtick\n\\set var `borp`\nsh: line 1: borp: command not found\n\\echo :SHELL_EXIT_CODE\n127\n-- good \\!\n\\! true\n\\echo :SHELL_EXIT_CODE\n0\n-- play with exit codes\n\\! exit 4\n\\echo :SHELL_EXIT_CODE\n1024\n\\set var `exit 3`\n\\echo :SHELL_EXIT_CODE\n3\n\n\nFeedback welcome.",
"msg_date": "Fri, 4 Nov 2022 05:08:31 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Oops, that sample output was from a previous run, should have been:\n\n-- SHELL_EXIT_CODE is undefined\n\\echo :SHELL_EXIT_CODE\n:SHELL_EXIT_CODE\n-- bad \\!\n\\! borp\nsh: line 1: borp: command not found\n\\echo :SHELL_EXIT_CODE\n127\n-- bad backtick\n\\set var `borp`\nsh: line 1: borp: command not found\n\\echo :SHELL_EXIT_CODE\n127\n-- good \\!\n\\! true\n\\echo :SHELL_EXIT_CODE\n0\n-- play with exit codes\n\\! exit 4\n\\echo :SHELL_EXIT_CODE\n4\n\\set var `exit 3`\n\\echo :SHELL_EXIT_CODE\n3\n\n\nOn Fri, Nov 4, 2022 at 5:08 AM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n>\n> Over in\n> https://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com Justin\n> Pryzby suggested that psql might need the ability to capture the shell exit\n> code.\n>\n> This is a POC patch that does that, but doesn't touch on the ON_ERROR_STOP\n> stuff.\n>\n> I've added some very rudimentary tests, but haven't touched the\n> documentation, because I strongly suspect that someone will suggest a\n> better name for the variable.\n>\n> But basically, it works like this\n>\n> -- SHELL_EXIT_CODE is undefined\n> \\echo :SHELL_EXIT_CODE\n> :SHELL_EXIT_CODE\n> -- bad \\!\n> \\! borp\n> sh: line 1: borp: command not found\n> \\echo :SHELL_EXIT_CODE\n> 32512\n> -- bad backtick\n> \\set var `borp`\n> sh: line 1: borp: command not found\n> \\echo :SHELL_EXIT_CODE\n> 127\n> -- good \\!\n> \\! true\n> \\echo :SHELL_EXIT_CODE\n> 0\n> -- play with exit codes\n> \\! exit 4\n> \\echo :SHELL_EXIT_CODE\n> 1024\n> \\set var `exit 3`\n> \\echo :SHELL_EXIT_CODE\n> 3\n>\n>\n> Feedback welcome.\n>\n\nOops, that sample output was from a previous run, should have been:-- SHELL_EXIT_CODE is undefined\\echo :SHELL_EXIT_CODE:SHELL_EXIT_CODE-- bad \\!\\! borpsh: line 1: borp: command not found\\echo :SHELL_EXIT_CODE127-- bad backtick\\set var `borp`sh: line 1: borp: command not found\\echo :SHELL_EXIT_CODE127-- good \\!\\! 
true\\echo :SHELL_EXIT_CODE0-- play with exit codes\\! exit 4\\echo :SHELL_EXIT_CODE4\\set var `exit 3`\\echo :SHELL_EXIT_CODE3On Fri, Nov 4, 2022 at 5:08 AM Corey Huinker <corey.huinker@gmail.com> wrote:Over in https://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com Justin Pryzby suggested that psql might need the ability to capture the shell exit code.This is a POC patch that does that, but doesn't touch on the ON_ERROR_STOP stuff.I've added some very rudimentary tests, but haven't touched the documentation, because I strongly suspect that someone will suggest a better name for the variable.But basically, it works like this-- SHELL_EXIT_CODE is undefined\\echo :SHELL_EXIT_CODE:SHELL_EXIT_CODE-- bad \\!\\! borpsh: line 1: borp: command not found\\echo :SHELL_EXIT_CODE32512-- bad backtick\\set var `borp`sh: line 1: borp: command not found\\echo :SHELL_EXIT_CODE127-- good \\!\\! true\\echo :SHELL_EXIT_CODE0-- play with exit codes\\! exit 4\\echo :SHELL_EXIT_CODE1024\\set var `exit 3`\\echo :SHELL_EXIT_CODE3Feedback welcome.",
"msg_date": "Fri, 4 Nov 2022 05:23:56 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Rebased. Still waiting on feedback before working on documentation.\n\nOn Fri, Nov 4, 2022 at 5:23 AM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> Oops, that sample output was from a previous run, should have been:\n>\n> -- SHELL_EXIT_CODE is undefined\n> \\echo :SHELL_EXIT_CODE\n> :SHELL_EXIT_CODE\n> -- bad \\!\n> \\! borp\n> sh: line 1: borp: command not found\n> \\echo :SHELL_EXIT_CODE\n> 127\n> -- bad backtick\n> \\set var `borp`\n> sh: line 1: borp: command not found\n> \\echo :SHELL_EXIT_CODE\n> 127\n> -- good \\!\n> \\! true\n> \\echo :SHELL_EXIT_CODE\n> 0\n> -- play with exit codes\n> \\! exit 4\n> \\echo :SHELL_EXIT_CODE\n> 4\n> \\set var `exit 3`\n> \\echo :SHELL_EXIT_CODE\n> 3\n>\n>\n> On Fri, Nov 4, 2022 at 5:08 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>>\n>> Over in\n>> https://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com Justin\n>> Pryzby suggested that psql might need the ability to capture the shell exit\n>> code.\n>>\n>> This is a POC patch that does that, but doesn't touch on the\n>> ON_ERROR_STOP stuff.\n>>\n>> I've added some very rudimentary tests, but haven't touched the\n>> documentation, because I strongly suspect that someone will suggest a\n>> better name for the variable.\n>>\n>> But basically, it works like this\n>>\n>> -- SHELL_EXIT_CODE is undefined\n>> \\echo :SHELL_EXIT_CODE\n>> :SHELL_EXIT_CODE\n>> -- bad \\!\n>> \\! borp\n>> sh: line 1: borp: command not found\n>> \\echo :SHELL_EXIT_CODE\n>> 32512\n>> -- bad backtick\n>> \\set var `borp`\n>> sh: line 1: borp: command not found\n>> \\echo :SHELL_EXIT_CODE\n>> 127\n>> -- good \\!\n>> \\! true\n>> \\echo :SHELL_EXIT_CODE\n>> 0\n>> -- play with exit codes\n>> \\! exit 4\n>> \\echo :SHELL_EXIT_CODE\n>> 1024\n>> \\set var `exit 3`\n>> \\echo :SHELL_EXIT_CODE\n>> 3\n>>\n>>\n>> Feedback welcome.\n>>\n>",
"msg_date": "Sun, 4 Dec 2022 00:35:39 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "I've rebased and updated the patch to include documentation.\n\nRegression tests have been moved to a separate patchfile because error\nmessages will vary by OS and configuration, so we probably can't do a\nstable regression test, but having them handy at least demonstrates the\nfeature.\n\nOn Sun, Dec 4, 2022 at 12:35 AM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> Rebased. Still waiting on feedback before working on documentation.\n>\n> On Fri, Nov 4, 2022 at 5:23 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>> Oops, that sample output was from a previous run, should have been:\n>>\n>> -- SHELL_EXIT_CODE is undefined\n>> \\echo :SHELL_EXIT_CODE\n>> :SHELL_EXIT_CODE\n>> -- bad \\!\n>> \\! borp\n>> sh: line 1: borp: command not found\n>> \\echo :SHELL_EXIT_CODE\n>> 127\n>> -- bad backtick\n>> \\set var `borp`\n>> sh: line 1: borp: command not found\n>> \\echo :SHELL_EXIT_CODE\n>> 127\n>> -- good \\!\n>> \\! true\n>> \\echo :SHELL_EXIT_CODE\n>> 0\n>> -- play with exit codes\n>> \\! exit 4\n>> \\echo :SHELL_EXIT_CODE\n>> 4\n>> \\set var `exit 3`\n>> \\echo :SHELL_EXIT_CODE\n>> 3\n>>\n>>\n>> On Fri, Nov 4, 2022 at 5:08 AM Corey Huinker <corey.huinker@gmail.com>\n>> wrote:\n>>\n>>>\n>>> Over in\n>>> https://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com Justin\n>>> Pryzby suggested that psql might need the ability to capture the shell exit\n>>> code.\n>>>\n>>> This is a POC patch that does that, but doesn't touch on the\n>>> ON_ERROR_STOP stuff.\n>>>\n>>> I've added some very rudimentary tests, but haven't touched the\n>>> documentation, because I strongly suspect that someone will suggest a\n>>> better name for the variable.\n>>>\n>>> But basically, it works like this\n>>>\n>>> -- SHELL_EXIT_CODE is undefined\n>>> \\echo :SHELL_EXIT_CODE\n>>> :SHELL_EXIT_CODE\n>>> -- bad \\!\n>>> \\! 
borp\n>>> sh: line 1: borp: command not found\n>>> \\echo :SHELL_EXIT_CODE\n>>> 32512\n>>> -- bad backtick\n>>> \\set var `borp`\n>>> sh: line 1: borp: command not found\n>>> \\echo :SHELL_EXIT_CODE\n>>> 127\n>>> -- good \\!\n>>> \\! true\n>>> \\echo :SHELL_EXIT_CODE\n>>> 0\n>>> -- play with exit codes\n>>> \\! exit 4\n>>> \\echo :SHELL_EXIT_CODE\n>>> 1024\n>>> \\set var `exit 3`\n>>> \\echo :SHELL_EXIT_CODE\n>>> 3\n>>>\n>>>\n>>> Feedback welcome.\n>>>\n>>",
"msg_date": "Wed, 21 Dec 2022 00:34:07 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Hi!\n\nThe patch is implementing what is declared to do. Shell return code is now\naccessible is psql var.\nOverall code is in a good condition. Applies with no errors on master.\nUnfortunately, regression tests are failing on the macOS due to the\ndifferent shell output.\n\n@@ -1308,13 +1308,13 @@\n deallocate q;\n -- test SHELL_EXIT_CODE\n \\! nosuchcommand\n-sh: line 1: nosuchcommand: command not found\n+sh: nosuchcommand: command not found\n \\echo :SHELL_EXIT_CODE\n 127\n \\set nosuchvar `nosuchcommand`\n-sh: line 1: nosuchcommand: command not found\n+sh: nosuchcommand: command not found\n \\! nosuchcommand\n-sh: line 1: nosuchcommand: command not found\n+sh: nosuchcommand: command not found\n \\echo :SHELL_EXIT_CODE\n 127\n\nSince we do not want to test shell output in these cases, but only return\ncode,\nwhat about using this kind of commands?\npostgres=# \\! true > /dev/null 2>&1\npostgres=# \\echo :SHELL_EXIT_CODE\n0\npostgres=# \\! false > /dev/null 2>&1\npostgres=# \\echo :SHELL_EXIT_CODE\n1\npostgres=# \\! nosuchcommand > /dev/null 2>&1\npostgres=# \\echo :SHELL_EXIT_CODE\n127\n\nIt is better to use spaces around \"=\".\n+ if (WIFEXITED(exit_code))\n+ exit_code=WEXITSTATUS(exit_code);\n+ else if(WIFSIGNALED(exit_code))\n+ exit_code=WTERMSIG(exit_code);\n+ else if(WIFSTOPPED(exit_code))\n+ exit_code=WSTOPSIG(exit_code);\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!The patch is implementing what is declared to do. Shell return code is now accessible is psql var.Overall code is in a good condition. Applies with no errors on master.Unfortunately, regression tests are failing on the macOS due to the different shell output.@@ -1308,13 +1308,13 @@ deallocate q; -- test SHELL_EXIT_CODE \\! nosuchcommand-sh: line 1: nosuchcommand: command not found+sh: nosuchcommand: command not found \\echo :SHELL_EXIT_CODE 127 \\set nosuchvar `nosuchcommand`-sh: line 1: nosuchcommand: command not found+sh: nosuchcommand: command not found \\! 
nosuchcommand-sh: line 1: nosuchcommand: command not found+sh: nosuchcommand: command not found \\echo :SHELL_EXIT_CODE 127Since we do not want to test shell output in these cases, but only return code,what about using this kind of commands? postgres=# \\! true > /dev/null 2>&1postgres=# \\echo :SHELL_EXIT_CODE0postgres=# \\! false > /dev/null 2>&1postgres=# \\echo :SHELL_EXIT_CODE1postgres=# \\! nosuchcommand > /dev/null 2>&1postgres=# \\echo :SHELL_EXIT_CODE127It is better to use spaces around \"=\".+ if (WIFEXITED(exit_code))+ exit_code=WEXITSTATUS(exit_code);+ else if(WIFSIGNALED(exit_code))+ exit_code=WTERMSIG(exit_code);+ else if(WIFSTOPPED(exit_code))+ exit_code=WSTOPSIG(exit_code);-- Best regards,Maxim Orlov.",
"msg_date": "Wed, 28 Dec 2022 13:58:54 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 5:59 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Hi!\n>\n> The patch is implementing what is declared to do. Shell return code is now\n> accessible is psql var.\n> Overall code is in a good condition. Applies with no errors on master.\n> Unfortunately, regression tests are failing on the macOS due to the\n> different shell output.\n>\n\nThat was to be expected.\n\nI wonder if there is value in setting up a psql on/off var\nSHELL_ERROR_OUTPUT construct that when set to \"off/false\"\nsuppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if\n#ifdef WIN32). At the very least, it would allow for tests like this to be\ndone with standard regression scripts.\n\n>\n\nOn Wed, Dec 28, 2022 at 5:59 AM Maxim Orlov <orlovmg@gmail.com> wrote:Hi!The patch is implementing what is declared to do. Shell return code is now accessible is psql var.Overall code is in a good condition. Applies with no errors on master.Unfortunately, regression tests are failing on the macOS due to the different shell output.That was to be expected.I wonder if there is value in setting up a psql on/off var SHELL_ERROR_OUTPUT construct that when set to \"off/false\" suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if #ifdef WIN32). At the very least, it would allow for tests like this to be done with standard regression scripts.",
"msg_date": "Fri, 30 Dec 2022 14:17:19 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 2:17 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> On Wed, Dec 28, 2022 at 5:59 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n>> Hi!\n>>\n>> The patch is implementing what is declared to do. Shell return code is\n>> now accessible is psql var.\n>> Overall code is in a good condition. Applies with no errors on master.\n>> Unfortunately, regression tests are failing on the macOS due to the\n>> different shell output.\n>>\n>\n> That was to be expected.\n>\n> I wonder if there is value in setting up a psql on/off var\n> SHELL_ERROR_OUTPUT construct that when set to \"off/false\"\n> suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if\n> #ifdef WIN32). At the very least, it would allow for tests like this to be\n> done with standard regression scripts.\n>\n\nThinking on this some more a few ideas came up:\n\n1. The SHELL_ERROR_OUTPUT var above. This is good for simple situations,\nbut it would fail if the user took it upon themselves to redirect output,\nand suddenly \"myprog 2> /dev/null\" works fine on its own but will fail when\nwe append our own \"2> /dev/null\" to it.\n\n2. Exposing a PSQL var that identifies the shell snippet for \"how to\nsuppress standard error\", which would be \"2> /dev/null\" for -nix systems\nand \"2> NUL\" for windows systems. This again, presumes that users will\nactually need this feature, and I'm not convinced that they will.\n\n3. Exposing a PSQL var that is basically an \"is this environment running on\nwindows\" and let them construct it from there. That gets complicated with\nWSL and the like, so I'm not wild about this, even though it would be\ncomparatively simple to implement.\n\n4. We could inject an environment variable into our own regression tests\ndirectly based on the \"#ifdef WIN32\" in our own code, and we can use a\ncouple of \\gset commands to construct the error-suppressed shell commands\nas needed. 
This seems like the best starting point, and we can always turn\nthat env var into a real variable if it proves useful elsewhere.\n\nOn Fri, Dec 30, 2022 at 2:17 PM Corey Huinker <corey.huinker@gmail.com> wrote:On Wed, Dec 28, 2022 at 5:59 AM Maxim Orlov <orlovmg@gmail.com> wrote:Hi!The patch is implementing what is declared to do. Shell return code is now accessible is psql var.Overall code is in a good condition. Applies with no errors on master.Unfortunately, regression tests are failing on the macOS due to the different shell output.That was to be expected.I wonder if there is value in setting up a psql on/off var SHELL_ERROR_OUTPUT construct that when set to \"off/false\" suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if #ifdef WIN32). At the very least, it would allow for tests like this to be done with standard regression scripts.Thinking on this some more a few ideas came up:1. The SHELL_ERROR_OUTPUT var above. This is good for simple situations, but it would fail if the user took it upon themselves to redirect output, and suddenly \"myprog 2> /dev/null\" works fine on its own but will fail when we append our own \"2> /dev/null\" to it.2. Exposing a PSQL var that identifies the shell snippet for \"how to suppress standard error\", which would be \"2> /dev/null\" for -nix systems and \"2> NUL\" for windows systems. This again, presumes that users will actually need this feature, and I'm not convinced that they will.3. Exposing a PSQL var that is basically an \"is this environment running on windows\" and let them construct it from there. That gets complicated with WSL and the like, so I'm not wild about this, even though it would be comparatively simple to implement.4. We could inject an environment variable into our own regression tests directly based on the \"#ifdef WIN32\" in our own code, and we can use a couple of \\gset commands to construct the error-suppressed shell commands as needed. 
This seems like the best starting point, and we can always turn that env var into a real variable if it proves useful elsewhere.",
"msg_date": "Sat, 31 Dec 2022 16:47:02 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Sat, 31 Dec 2022 at 16:47, Corey Huinker <corey.huinker@gmail.com> wrote:\n\n>\n>> I wonder if there is value in setting up a psql on/off var\n>> SHELL_ERROR_OUTPUT construct that when set to \"off/false\"\n>> suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if\n>> #ifdef WIN32). At the very least, it would allow for tests like this to be\n>> done with standard regression scripts.\n>>\n>\n> Thinking on this some more a few ideas came up:\n>\n> 1. The SHELL_ERROR_OUTPUT var above. This is good for simple situations,\n> but it would fail if the user took it upon themselves to redirect output,\n> and suddenly \"myprog 2> /dev/null\" works fine on its own but will fail when\n> we append our own \"2> /dev/null\" to it.\n>\n\nRather than attempting to append redirection directives to the command,\nwould it work to redirect stderr before invoking the shell? This seems to\nme to be more reliable and it should allow an explicit redirection in the\nshell command to still work. The difference between Windows and Unix then\nbecomes the details of what system calls we use to accomplish the\nredirection (or maybe none, if an existing abstraction layer takes care of\nthat - I'm not really up on Postgres internals enough to know), rather than\nwhat we append to the provided command.\n\nOn Sat, 31 Dec 2022 at 16:47, Corey Huinker <corey.huinker@gmail.com> wrote:I wonder if there is value in setting up a psql on/off var SHELL_ERROR_OUTPUT construct that when set to \"off/false\" suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if #ifdef WIN32). At the very least, it would allow for tests like this to be done with standard regression scripts.Thinking on this some more a few ideas came up:1. The SHELL_ERROR_OUTPUT var above. 
This is good for simple situations, but it would fail if the user took it upon themselves to redirect output, and suddenly \"myprog 2> /dev/null\" works fine on its own but will fail when we append our own \"2> /dev/null\" to it.Rather than attempting to append redirection directives to the command, would it work to redirect stderr before invoking the shell? This seems to me to be more reliable and it should allow an explicit redirection in the shell command to still work. The difference between Windows and Unix then becomes the details of what system calls we use to accomplish the redirection (or maybe none, if an existing abstraction layer takes care of that - I'm not really up on Postgres internals enough to know), rather than what we append to the provided command.",
"msg_date": "Sat, 31 Dec 2022 17:28:40 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 5:28 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Sat, 31 Dec 2022 at 16:47, Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>>\n>>> I wonder if there is value in setting up a psql on/off var\n>>> SHELL_ERROR_OUTPUT construct that when set to \"off/false\"\n>>> suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if\n>>> #ifdef WIN32). At the very least, it would allow for tests like this to be\n>>> done with standard regression scripts.\n>>>\n>>\n>> Thinking on this some more a few ideas came up:\n>>\n>> 1. The SHELL_ERROR_OUTPUT var above. This is good for simple situations,\n>> but it would fail if the user took it upon themselves to redirect output,\n>> and suddenly \"myprog 2> /dev/null\" works fine on its own but will fail when\n>> we append our own \"2> /dev/null\" to it.\n>>\n>\n> Rather than attempting to append redirection directives to the command,\n> would it work to redirect stderr before invoking the shell? This seems to\n> me to be more reliable and it should allow an explicit redirection in the\n> shell command to still work. The difference between Windows and Unix then\n> becomes the details of what system calls we use to accomplish the\n> redirection (or maybe none, if an existing abstraction layer takes care of\n> that - I'm not really up on Postgres internals enough to know), rather than\n> what we append to the provided command.\n>\n>\nInside psql, it's a call to the system() function which takes a single\nstring. All output, stdout or stderr, is printed. 
So for the regression\ntest we have to compose a command that is OS appropriate AND suppresses\nstderr.\n\nOn Sat, Dec 31, 2022 at 5:28 PM Isaac Morland <isaac.morland@gmail.com> wrote:On Sat, 31 Dec 2022 at 16:47, Corey Huinker <corey.huinker@gmail.com> wrote:I wonder if there is value in setting up a psql on/off var SHELL_ERROR_OUTPUT construct that when set to \"off/false\" suppresses standard error via appending \"2> /dev/null\" (or \"2> nul\" if #ifdef WIN32). At the very least, it would allow for tests like this to be done with standard regression scripts.Thinking on this some more a few ideas came up:1. The SHELL_ERROR_OUTPUT var above. This is good for simple situations, but it would fail if the user took it upon themselves to redirect output, and suddenly \"myprog 2> /dev/null\" works fine on its own but will fail when we append our own \"2> /dev/null\" to it.Rather than attempting to append redirection directives to the command, would it work to redirect stderr before invoking the shell? This seems to me to be more reliable and it should allow an explicit redirection in the shell command to still work. The difference between Windows and Unix then becomes the details of what system calls we use to accomplish the redirection (or maybe none, if an existing abstraction layer takes care of that - I'm not really up on Postgres internals enough to know), rather than what we append to the provided command.Inside psql, it's a call to the system() function which takes a single string. All output, stdout or stderr, is printed. So for the regression test we have to compose a command that is OS appropriate AND suppresses stderr.",
"msg_date": "Sat, 31 Dec 2022 19:15:36 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Wed, 21 Dec 2022 at 11:04, Corey Huinker <corey.huinker@gmail.com> wrote:\n>\n> I've rebased and updated the patch to include documentation.\n>\n> Regression tests have been moved to a separate patchfile because error messages will vary by OS and configuration, so we probably can't do a stable regression test, but having them handy at least demonstrates the feature.\n>\n> On Sun, Dec 4, 2022 at 12:35 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>>\n>> Rebased. Still waiting on feedback before working on documentation.\n>>\n>> On Fri, Nov 4, 2022 at 5:23 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>>>\n>>> Oops, that sample output was from a previous run, should have been:\n>>>\n>>> -- SHELL_EXIT_CODE is undefined\n>>> \\echo :SHELL_EXIT_CODE\n>>> :SHELL_EXIT_CODE\n>>> -- bad \\!\n>>> \\! borp\n>>> sh: line 1: borp: command not found\n>>> \\echo :SHELL_EXIT_CODE\n>>> 127\n>>> -- bad backtick\n>>> \\set var `borp`\n>>> sh: line 1: borp: command not found\n>>> \\echo :SHELL_EXIT_CODE\n>>> 127\n>>> -- good \\!\n>>> \\! true\n>>> \\echo :SHELL_EXIT_CODE\n>>> 0\n>>> -- play with exit codes\n>>> \\! exit 4\n>>> \\echo :SHELL_EXIT_CODE\n>>> 4\n>>> \\set var `exit 3`\n>>> \\echo :SHELL_EXIT_CODE\n>>> 3\n>>>\n>>>\n>>> On Fri, Nov 4, 2022 at 5:08 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>>>>\n>>>>\n>>>> Over in https://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com Justin Pryzby suggested that psql might need the ability to capture the shell exit code.\n>>>>\n>>>> This is a POC patch that does that, but doesn't touch on the ON_ERROR_STOP stuff.\n>>>>\n>>>> I've added some very rudimentary tests, but haven't touched the documentation, because I strongly suspect that someone will suggest a better name for the variable.\n>>>>\n>>>> But basically, it works like this\n>>>>\n>>>> -- SHELL_EXIT_CODE is undefined\n>>>> \\echo :SHELL_EXIT_CODE\n>>>> :SHELL_EXIT_CODE\n>>>> -- bad \\!\n>>>> \\! 
borp\n>>>> sh: line 1: borp: command not found\n>>>> \\echo :SHELL_EXIT_CODE\n>>>> 32512\n>>>> -- bad backtick\n>>>> \\set var `borp`\n>>>> sh: line 1: borp: command not found\n>>>> \\echo :SHELL_EXIT_CODE\n>>>> 127\n>>>> -- good \\!\n>>>> \\! true\n>>>> \\echo :SHELL_EXIT_CODE\n>>>> 0\n>>>> -- play with exit codes\n>>>> \\! exit 4\n>>>> \\echo :SHELL_EXIT_CODE\n>>>> 1024\n>>>> \\set var `exit 3`\n>>>> \\echo :SHELL_EXIT_CODE\n>>>> 3\n>>>>\n>>>>\n>>>> Feedback welcome.\n\nCFBot shows some compilation errors as in [1], please post an updated\nversion for the same:\n[02:35:49.924] psqlscanslash.l: In function ‘evaluate_backtick’:\n[02:35:49.924] psqlscanslash.l:822:11: error: implicit declaration of\nfunction ‘WIFSTOPPED’ [-Werror=implicit-function-declaration]\n[02:35:49.924] 822 | exit_code=WSTOPSIG(exit_code);\n[02:35:49.924] | ^~~~~~~~~~\n[02:35:49.924] psqlscanslash.l:823:14: error: implicit declaration of\nfunction ‘WSTOPSIG’ [-Werror=implicit-function-declaration]\n[02:35:49.924] 823 | }\n[02:35:49.924] | ^\n[02:35:49.924] cc1: all warnings being treated as errors\n[02:35:49.925] make[3]: *** [<builtin>: psqlscanslash.o] Error 1\n\n[1] - https://cirrus-ci.com/task/5424476720988160\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:06:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 5:36 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Wed, 21 Dec 2022 at 11:04, Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >\n> > I've rebased and updated the patch to include documentation.\n> >\n> > Regression tests have been moved to a separate patchfile because error\n> messages will vary by OS and configuration, so we probably can't do a\n> stable regression test, but having them handy at least demonstrates the\n> feature.\n> >\n> > On Sun, Dec 4, 2022 at 12:35 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >>\n> >> Rebased. Still waiting on feedback before working on documentation.\n> >>\n> >> On Fri, Nov 4, 2022 at 5:23 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >>>\n> >>> Oops, that sample output was from a previous run, should have been:\n> >>>\n> >>> -- SHELL_EXIT_CODE is undefined\n> >>> \\echo :SHELL_EXIT_CODE\n> >>> :SHELL_EXIT_CODE\n> >>> -- bad \\!\n> >>> \\! borp\n> >>> sh: line 1: borp: command not found\n> >>> \\echo :SHELL_EXIT_CODE\n> >>> 127\n> >>> -- bad backtick\n> >>> \\set var `borp`\n> >>> sh: line 1: borp: command not found\n> >>> \\echo :SHELL_EXIT_CODE\n> >>> 127\n> >>> -- good \\!\n> >>> \\! true\n> >>> \\echo :SHELL_EXIT_CODE\n> >>> 0\n> >>> -- play with exit codes\n> >>> \\! 
exit 4\n> >>> \\echo :SHELL_EXIT_CODE\n> >>> 4\n> >>> \\set var `exit 3`\n> >>> \\echo :SHELL_EXIT_CODE\n> >>> 3\n> >>>\n> >>>\n> >>> On Fri, Nov 4, 2022 at 5:08 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >>>>\n> >>>>\n> >>>> Over in\n> https://www.postgresql.org/message-id/eaf326ad693e74eba068f33a7f518039@oss.nttdata.com\n> Justin Pryzby suggested that psql might need the ability to capture the\n> shell exit code.\n> >>>>\n> >>>> This is a POC patch that does that, but doesn't touch on the\n> ON_ERROR_STOP stuff.\n> >>>>\n> >>>> I've added some very rudimentary tests, but haven't touched the\n> documentation, because I strongly suspect that someone will suggest a\n> better name for the variable.\n> >>>>\n> >>>> But basically, it works like this\n> >>>>\n> >>>> -- SHELL_EXIT_CODE is undefined\n> >>>> \\echo :SHELL_EXIT_CODE\n> >>>> :SHELL_EXIT_CODE\n> >>>> -- bad \\!\n> >>>> \\! borp\n> >>>> sh: line 1: borp: command not found\n> >>>> \\echo :SHELL_EXIT_CODE\n> >>>> 32512\n> >>>> -- bad backtick\n> >>>> \\set var `borp`\n> >>>> sh: line 1: borp: command not found\n> >>>> \\echo :SHELL_EXIT_CODE\n> >>>> 127\n> >>>> -- good \\!\n> >>>> \\! true\n> >>>> \\echo :SHELL_EXIT_CODE\n> >>>> 0\n> >>>> -- play with exit codes\n> >>>> \\! 
exit 4\n> >>>> \\echo :SHELL_EXIT_CODE\n> >>>> 1024\n> >>>> \\set var `exit 3`\n> >>>> \\echo :SHELL_EXIT_CODE\n> >>>> 3\n> >>>>\n> >>>>\n> >>>> Feedback welcome.\n>\n> CFBot shows some compilation errors as in [1], please post an updated\n> version for the same:\n> [02:35:49.924] psqlscanslash.l: In function ‘evaluate_backtick’:\n> [02:35:49.924] psqlscanslash.l:822:11: error: implicit declaration of\n> function ‘WIFSTOPPED’ [-Werror=implicit-function-declaration]\n> [02:35:49.924] 822 | exit_code=WSTOPSIG(exit_code);\n> [02:35:49.924] | ^~~~~~~~~~\n> [02:35:49.924] psqlscanslash.l:823:14: error: implicit declaration of\n> function ‘WSTOPSIG’ [-Werror=implicit-function-declaration]\n> [02:35:49.924] 823 | }\n> [02:35:49.924] | ^\n> [02:35:49.924] cc1: all warnings being treated as errors\n> [02:35:49.925] make[3]: *** [<builtin>: psqlscanslash.o] Error 1\n>\n> [1] - https://cirrus-ci.com/task/5424476720988160\n>\n> Regards,\n> Vignesh\n>\n\nThanks. I had left sys/wait.h out of psqlscanslash.\n\nAttached is v3 of this patch, I've made the following changes:\n\n1. pg_regress now creates an environment variable called PG_OS_TARGET,\nwhich regression tests can use to manufacture os-specific commands. For our\npurposes, this allows the regression test to manufacture a shell command\nthat has either \"2> /dev/null\" or \"2> NUL\". This seemed the least invasive\nway to make this possible. If for some reason it becomes useful in general\npsql scripting, then we can consider promoting it to a regular psql var.\n\n2. There are now two psql variables, SHELL_EXIT_CODE, which has the return\ncode, and SHELL_ERROR, which is a true/false flag based on whether the exit\ncode was nonzero. These variables are the shell result analogues of\nSQLSTATE and ERROR.",
"msg_date": "Wed, 4 Jan 2023 02:09:23 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Hi!\n\nIn overall, I think we move in the right direction. But we could make code\nbetter, should we?\n\n+ /* Capture exit code for SHELL_EXIT_CODE */\n+ close_exit_code = pclose(fd);\n+ if (close_exit_code == -1)\n+ {\n+ pg_log_error(\"%s: %m\", cmd);\n+ error = true;\n+ }\n+ if (WIFEXITED(close_exit_code))\n+ exit_code=WEXITSTATUS(close_exit_code);\n+ else if(WIFSIGNALED(close_exit_code))\n+ exit_code=WTERMSIG(close_exit_code);\n+ else if(WIFSTOPPED(close_exit_code))\n+ exit_code=WSTOPSIG(close_exit_code);\n+ if (exit_code)\n+ error = true;\nI think, it's better to add spaces around middle if block. It will be easy\nto read.\nAlso, consider, adding spaces around assignment in this block.\n\n+ /*\n+ snprintf(exit_code_buf, sizeof(exit_code_buf), \"%d\",\nWEXITSTATUS(exit_code));\n+ */\nProbably, this is not needed.\n\n\n> 1. pg_regress now creates an environment variable called PG_OS_TARGET\nMaybe, we can use env \"OS\"? I do not know much about Windows, but I think\nthis is kind of standard environment variable there.\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!In overall, I think we move in the right direction. But we could make code better, should we?+ /* Capture exit code for SHELL_EXIT_CODE */+ close_exit_code = pclose(fd);+ if (close_exit_code == -1)+ {+ pg_log_error(\"%s: %m\", cmd);+ error = true;+ }+ if (WIFEXITED(close_exit_code))+ exit_code=WEXITSTATUS(close_exit_code);+ else if(WIFSIGNALED(close_exit_code))+ exit_code=WTERMSIG(close_exit_code);+ else if(WIFSTOPPED(close_exit_code))+ exit_code=WSTOPSIG(close_exit_code);+ if (exit_code)+ error = true;I think, it's better to add spaces around middle if block. It will be easy to read.Also, consider, adding spaces around assignment in this block.+ /*+ snprintf(exit_code_buf, sizeof(exit_code_buf), \"%d\", WEXITSTATUS(exit_code));+ */Probably, this is not needed.> 1. pg_regress now creates an environment variable called PG_OS_TARGETMaybe, we can use env \"OS\"? 
I do not know much about Windows, but I think this is kind of standard environment variable there.-- Best regards,Maxim Orlov.",
"msg_date": "Mon, 9 Jan 2023 18:01:37 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 10:01 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Hi!\n>\n> In overall, I think we move in the right direction. But we could make code\n> better, should we?\n>\n> + /* Capture exit code for SHELL_EXIT_CODE */\n> + close_exit_code = pclose(fd);\n> + if (close_exit_code == -1)\n> + {\n> + pg_log_error(\"%s: %m\", cmd);\n> + error = true;\n> + }\n> + if (WIFEXITED(close_exit_code))\n> + exit_code=WEXITSTATUS(close_exit_code);\n> + else if(WIFSIGNALED(close_exit_code))\n> + exit_code=WTERMSIG(close_exit_code);\n> + else if(WIFSTOPPED(close_exit_code))\n> + exit_code=WSTOPSIG(close_exit_code);\n> + if (exit_code)\n> + error = true;\n> I think, it's better to add spaces around middle if block. It will be easy\n> to read.\n> Also, consider, adding spaces around assignment in this block.\n>\n\nNoted and will implement.\n\n\n> + /*\n> + snprintf(exit_code_buf, sizeof(exit_code_buf), \"%d\",\n> WEXITSTATUS(exit_code));\n> + */\n> Probably, this is not needed.\n>\n\nHeh. Oops.\n\n\n> > 1. pg_regress now creates an environment variable called PG_OS_TARGET\n> Maybe, we can use env \"OS\"? I do not know much about Windows, but I think\n> this is kind of standard environment variable there.\n>\n\nI chose a name that would avoid collisions with anything a user might\npotentially throw into their environment, so if the var \"OS\" is fairly\nstandard is a reason to avoid using it. Also, going with our own env var\nallows us to stay in perfect synchronization with the build's #ifdef WIN32\n... and whatever #ifdefs may come in the future for new OSes. If there is\nalready an environment variable that does that for us, I would rather use\nthat, but I haven't found it.\n\nThe 0001 patch is unchanged from last time (aside from anything rebasing\nmight have done).\n\n>",
"msg_date": "Mon, 9 Jan 2023 13:36:12 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 21:36, Corey Huinker <corey.huinker@gmail.com> wrote:\n\n>\n> I chose a name that would avoid collisions with anything a user might\n> potentially throw into their environment, so if the var \"OS\" is fairly\n> standard is a reason to avoid using it. Also, going with our own env var\n> allows us to stay in perfect synchronization with the build's #ifdef WIN32\n> ... and whatever #ifdefs may come in the future for new OSes. If there is\n> already an environment variable that does that for us, I would rather use\n> that, but I haven't found it.\n>\n> Perhaps, I didn't make myself clear. Your solution is perfectly adapted to\nour needs.\nBut all Windows since 2000 already have an environment variable\nOS=Windows_NT. So, if env OS is defined and equal Windows_NT, this had to\nbe Windows.\nMay we use it in our case? I don't insist, just asking.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Mon, 9 Jan 2023 at 21:36, Corey Huinker <corey.huinker@gmail.com> wrote:I chose a name that would avoid collisions with anything a user might potentially throw into their environment, so if the var \"OS\" is fairly standard is a reason to avoid using it. Also, going with our own env var allows us to stay in perfect synchronization with the build's #ifdef WIN32 ... and whatever #ifdefs may come in the future for new OSes. If there is already an environment variable that does that for us, I would rather use that, but I haven't found it.Perhaps, I didn't make myself clear. Your solution is perfectly adapted to our needs.But all Windows since 2000 already have an environment variable OS=Windows_NT. So, if env OS is defined and equal Windows_NT, this had to be Windows.May we use it in our case? I don't insist, just asking.-- Best regards,Maxim Orlov.",
"msg_date": "Tue, 10 Jan 2023 11:54:15 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 3:54 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n\n>\n>\n> On Mon, 9 Jan 2023 at 21:36, Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>>\n>> I chose a name that would avoid collisions with anything a user might\n>> potentially throw into their environment, so if the var \"OS\" is fairly\n>> standard is a reason to avoid using it. Also, going with our own env var\n>> allows us to stay in perfect synchronization with the build's #ifdef WIN32\n>> ... and whatever #ifdefs may come in the future for new OSes. If there is\n>> already an environment variable that does that for us, I would rather use\n>> that, but I haven't found it.\n>>\n>> Perhaps, I didn't make myself clear. Your solution is perfectly adapted\n> to our needs.\n> But all Windows since 2000 already have an environment variable\n> OS=Windows_NT. So, if env OS is defined and equal Windows_NT, this had to\n> be Windows.\n> May we use it in our case? I don't insist, just asking.\n>\n\nAh, that makes more sense. I don't have a strong opinion on which we should\nuse, and I would probably defer to someone who does windows (and possibly\ncygwin) builds more often than I do.\n\nOn Tue, Jan 10, 2023 at 3:54 AM Maxim Orlov <orlovmg@gmail.com> wrote:On Mon, 9 Jan 2023 at 21:36, Corey Huinker <corey.huinker@gmail.com> wrote:I chose a name that would avoid collisions with anything a user might potentially throw into their environment, so if the var \"OS\" is fairly standard is a reason to avoid using it. Also, going with our own env var allows us to stay in perfect synchronization with the build's #ifdef WIN32 ... and whatever #ifdefs may come in the future for new OSes. If there is already an environment variable that does that for us, I would rather use that, but I haven't found it.Perhaps, I didn't make myself clear. Your solution is perfectly adapted to our needs.But all Windows since 2000 already have an environment variable OS=Windows_NT. 
So, if env OS is defined and equal Windows_NT, this had to be Windows.May we use it in our case? I don't insist, just asking.Ah, that makes more sense. I don't have a strong opinion on which we should use, and I would probably defer to someone who does windows (and possibly cygwin) builds more often than I do.",
"msg_date": "Tue, 10 Jan 2023 15:31:44 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 00:06, Corey Huinker <corey.huinker@gmail.com> wrote:\n>\n> On Mon, Jan 9, 2023 at 10:01 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n>>\n>> Hi!\n>>\n>> In overall, I think we move in the right direction. But we could make code better, should we?\n>>\n>> + /* Capture exit code for SHELL_EXIT_CODE */\n>> + close_exit_code = pclose(fd);\n>> + if (close_exit_code == -1)\n>> + {\n>> + pg_log_error(\"%s: %m\", cmd);\n>> + error = true;\n>> + }\n>> + if (WIFEXITED(close_exit_code))\n>> + exit_code=WEXITSTATUS(close_exit_code);\n>> + else if(WIFSIGNALED(close_exit_code))\n>> + exit_code=WTERMSIG(close_exit_code);\n>> + else if(WIFSTOPPED(close_exit_code))\n>> + exit_code=WSTOPSIG(close_exit_code);\n>> + if (exit_code)\n>> + error = true;\n>> I think, it's better to add spaces around middle if block. It will be easy to read.\n>> Also, consider, adding spaces around assignment in this block.\n>\n>\n> Noted and will implement.\n>\n>>\n>> + /*\n>> + snprintf(exit_code_buf, sizeof(exit_code_buf), \"%d\", WEXITSTATUS(exit_code));\n>> + */\n>> Probably, this is not needed.\n>\n>\n> Heh. Oops.\n>\n>>\n>> > 1. pg_regress now creates an environment variable called PG_OS_TARGET\n>> Maybe, we can use env \"OS\"? I do not know much about Windows, but I think this is kind of standard environment variable there.\n>\n>\n> I chose a name that would avoid collisions with anything a user might potentially throw into their environment, so if the var \"OS\" is fairly standard is a reason to avoid using it. Also, going with our own env var allows us to stay in perfect synchronization with the build's #ifdef WIN32 ... and whatever #ifdefs may come in the future for new OSes. 
If there is already an environment variable that does that for us, I would rather use that, but I haven't found it.\n>\n> The 0001 patch is unchanged from last time (aside from anything rebasing might have done).\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\n3c6fc58209f24b959ee18f5d19ef96403d08f15c ===\n=== applying patch\n./v4-0002-Add-psql-variables-SHELL_ERROR-and-SHELL_EXIT_COD.patch\npatching file doc/src/sgml/ref/psql-ref.sgml\nHunk #1 FAILED at 4264.\n1 out of 1 hunk FAILED -- saving rejects to file\ndoc/src/sgml/ref/psql-ref.sgml.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4073.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 08:23:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": ">\n>\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n>\n>\nConflict was due to the doc patch applying id tags to psql variable names.\nI've rebased and added my own id tags to the two new variables.",
"msg_date": "Wed, 11 Jan 2023 17:51:37 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Unfortunately, cirrus-ci still not happy\nhttps://cirrus-ci.com/task/6502730475241472\n\n[23:14:34.206] time make -s -j${BUILD_JOBS} world-bin\n[23:14:43.174] psqlscanslash.l: In function ‘evaluate_backtick’:\n[23:14:43.174] psqlscanslash.l:827:11: error: implicit declaration of\nfunction ‘WIFSTOPPED’ [-Werror=implicit-function-declaration]\n[23:14:43.174] 827 | exit_code = WSTOPSIG(close_exit_code);\n[23:14:43.174] | ^~~~~~~~~~\n[23:14:43.174] psqlscanslash.l:828:16: error: implicit declaration of\nfunction ‘WSTOPSIG’ [-Werror=implicit-function-declaration]\n[23:14:43.174] 828 |\n[23:14:43.174] | ^\n[23:14:43.174] cc1: all warnings being treated as errors\n\n>\nand on FreeBSD\n\n[23:13:03.914] cc -o ...\n[23:13:03.914] ld: error: undefined symbol: WEXITSTATUS\n[23:13:03.914] >>> referenced by command.c:5036\n(../src/bin/psql/command.c:5036)\n[23:13:03.914] >>>\nsrc/bin/psql/psql.p/command.c.o:(exec_command_shell_escape)\n[23:13:03.914] cc: error: linker command failed with exit code 1 (use -v to\nsee invocation)\n\nand on Windows\n\n[23:19:51.870] meson-generated_.._psqlscanslash.c.obj : error LNK2019:\nunresolved external symbol WIFSTOPPED referenced in function\nevaluate_backtick\n[23:19:52.197] meson-generated_.._psqlscanslash.c.obj : error LNK2019:\nunresolved external symbol WSTOPSIG referenced in function evaluate_backtick\n[23:19:52.197] src\\bin\\psql\\psql.exe : fatal error LNK1120: 2 unresolved\nexternals\n\nI belive, we need proper includes.\n\n-- \nBest regards,\nMaxim Orlov.\n\nUnfortunately, cirrus-ci still not happy https://cirrus-ci.com/task/6502730475241472[23:14:34.206] time make -s -j${BUILD_JOBS} world-bin[23:14:43.174] psqlscanslash.l: In function ‘evaluate_backtick’:[23:14:43.174] psqlscanslash.l:827:11: error: implicit declaration of function ‘WIFSTOPPED’ [-Werror=implicit-function-declaration][23:14:43.174] 827 | exit_code = WSTOPSIG(close_exit_code);[23:14:43.174] | ^~~~~~~~~~[23:14:43.174] psqlscanslash.l:828:16: error: 
implicit declaration of function ‘WSTOPSIG’ [-Werror=implicit-function-declaration][23:14:43.174] 828 | [23:14:43.174] | ^ [23:14:43.174] cc1: all warnings being treated as errors\nand on FreeBSD[23:13:03.914] cc -o ...[23:13:03.914] ld: error: undefined symbol: WEXITSTATUS[23:13:03.914] >>> referenced by command.c:5036 (../src/bin/psql/command.c:5036)[23:13:03.914] >>> src/bin/psql/psql.p/command.c.o:(exec_command_shell_escape)[23:13:03.914] cc: error: linker command failed with exit code 1 (use -v to see invocation)and on Windows[23:19:51.870] meson-generated_.._psqlscanslash.c.obj : error LNK2019: unresolved external symbol WIFSTOPPED referenced in function evaluate_backtick[23:19:52.197] meson-generated_.._psqlscanslash.c.obj : error LNK2019: unresolved external symbol WSTOPSIG referenced in function evaluate_backtick[23:19:52.197] src\\bin\\psql\\psql.exe : fatal error LNK1120: 2 unresolved externalsI belive, we need proper includes.-- Best regards,Maxim Orlov.",
"msg_date": "Thu, 12 Jan 2023 19:14:59 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": ">\n> I belive, we need proper includes.\n>\n\nGiven that wait_error.c already seems to have the right includes worked out\nfor WEXITSTATUS/WIFSTOPPED/etc, I decided to just add a function there.\n\nI named it wait_result_to_exit_code(), but I welcome suggestions of a\nbetter name.",
"msg_date": "Thu, 12 Jan 2023 23:50:26 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Fri, 13 Jan 2023 at 07:50, Corey Huinker <corey.huinker@gmail.com> wrote:\n\n>\n> I named it wait_result_to_exit_code(), but I welcome suggestions of a\n> better name.\n>\n\nThanks! But CF bot still not happy. I think, we should address issues from\nhere https://cirrus-ci.com/task/5391002618298368\n\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Fri, 13 Jan 2023 at 07:50, Corey Huinker <corey.huinker@gmail.com> wrote:I named it wait_result_to_exit_code(), but I welcome suggestions of a better name. Thanks! But CF bot still not happy. I think, we should address issues from here https://cirrus-ci.com/task/5391002618298368-- Best regards,Maxim Orlov.",
"msg_date": "Fri, 20 Jan 2023 12:19:37 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 2:09 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n> 2. There are now two psql variables, SHELL_EXIT_CODE, which has the return code, and SHELL_ERROR, which is a true/false flag based on whether the exit code was nonzero. These variables are the shell result analogues of SQLSTATE and ERROR.\n\nSeems redundant.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Jan 2023 08:54:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 8:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jan 4, 2023 at 2:09 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> > 2. There are now two psql variables, SHELL_EXIT_CODE, which has the\n> return code, and SHELL_ERROR, which is a true/false flag based on whether\n> the exit code was nonzero. These variables are the shell result analogues\n> of SQLSTATE and ERROR.\n>\n> Seems redundant.\n>\n\nSHELL_ERROR is helpful in that it is a ready-made boolean that works for\n\\if tests in the same way that ERROR is set to true any time SQLSTATE is\nnonzero. We don't yet have inline expressions for \\if so the ready-made\nboolean is a convenience.\n\nOr are you suggesting that I should have just set ERROR itself rather than\ncreate SHELL_ERROR?\n\nOn Fri, Jan 20, 2023 at 8:54 AM Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Jan 4, 2023 at 2:09 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n> 2. There are now two psql variables, SHELL_EXIT_CODE, which has the return code, and SHELL_ERROR, which is a true/false flag based on whether the exit code was nonzero. These variables are the shell result analogues of SQLSTATE and ERROR.\n\nSeems redundant.SHELL_ERROR is helpful in that it is a ready-made boolean that works for \\if tests in the same way that ERROR is set to true any time SQLSTATE is nonzero. We don't yet have inline expressions for \\if so the ready-made boolean is a convenience.Or are you suggesting that I should have just set ERROR itself rather than create SHELL_ERROR?",
"msg_date": "Mon, 23 Jan 2023 13:59:29 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
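To make the \if use case concrete, a script along these lines is what the ready-made boolean enables (hypothetical example — the command and directory are made up; SHELL_ERROR and SHELL_EXIT_CODE are the variables proposed in the patch):

```
-- abort the script if the shell command failed
\! mkdir -p /tmp/report
\if :SHELL_ERROR
    \echo shell command failed with exit code :SHELL_EXIT_CODE
    \quit
\endif
```

Without the boolean, a user would have to test :SHELL_EXIT_CODE against zero, which \if cannot express today.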
{
"msg_contents": "On Mon, Jan 23, 2023 at 1:59 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> SHELL_ERROR is helpful in that it is a ready-made boolean that works for \\if tests in the same way that ERROR is set to true any time SQLSTATE is nonzero. We don't yet have inline expressions for \\if so the ready-made boolean is a convenience.\n\nOh, that seems a bit sad, but I guess it makes sense.\n\n> Or are you suggesting that I should have just set ERROR itself rather than create SHELL_ERROR?\n\nNo, I wasn't suggesting that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Jan 2023 14:53:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 2:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 23, 2023 at 1:59 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> > SHELL_ERROR is helpful in that it is a ready-made boolean that works for\n> \\if tests in the same way that ERROR is set to true any time SQLSTATE is\n> nonzero. We don't yet have inline expressions for \\if so the ready-made\n> boolean is a convenience.\n>\n> Oh, that seems a bit sad, but I guess it makes sense.\n>\n\nI agree, but there hasn't been much appetite for deciding what expressions\nwould look like, or how we'd implement it. My instinct would be to not\ncreate our own expression engine, but instead integrate one that is already\navailable. For my needs, the Unix `expr` command would be ideal (compares\nnumbers and strings, can do regexes, can do complex expressions), but it's\nnot cross platform.\n\nOn Mon, Jan 23, 2023 at 2:53 PM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Jan 23, 2023 at 1:59 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> SHELL_ERROR is helpful in that it is a ready-made boolean that works for \\if tests in the same way that ERROR is set to true any time SQLSTATE is nonzero. We don't yet have inline expressions for \\if so the ready-made boolean is a convenience.\n\nOh, that seems a bit sad, but I guess it makes sense.I agree, but there hasn't been much appetite for deciding what expressions would look like, or how we'd implement it. My instinct would be to not create our own expression engine, but instead integrate one that is already available. For my needs, the Unix `expr` command would be ideal (compares numbers and strings, can do regexes, can do complex expressions), but it's not cross platform.",
"msg_date": "Mon, 23 Jan 2023 15:31:33 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": ">\n> Thanks! But CF bot still not happy. I think, we should address issues from\n> here https://cirrus-ci.com/task/5391002618298368\n>\n\nSure enough, exit codes are shell dependent...adjusted the tests to reflect\nthat.",
"msg_date": "Mon, 23 Jan 2023 16:50:05 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Unfortunately, there is a fail in FreeBSD\nhttps://cirrus-ci.com/task/6466749487382528\n\nMaybe, this patch is need to be rebased?\n\n-- \nBest regards,\nMaxim Orlov.\n\nUnfortunately, there is a fail in FreeBSD https://cirrus-ci.com/task/6466749487382528Maybe, this patch is need to be rebased?-- Best regards,Maxim Orlov.",
"msg_date": "Mon, 30 Jan 2023 13:22:31 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": ">\n>\n> Unfortunately, there is a fail in FreeBSD\n> https://cirrus-ci.com/task/6466749487382528\n>\n> Maybe, this patch is need to be rebased?\n>\n>\nThat failure is in postgres_fdw, which this code doesn't touch.\n\nI'm not able to get\nto /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/regression.out\n- It's not in either of the browseable zip files and the 2nd zip file isn't\ndownloadable, so it's hard to see what's going on.\n\nI rebased, but there are no code differences.",
"msg_date": "Mon, 30 Jan 2023 15:23:21 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Mon, 30 Jan 2023 at 23:23, Corey Huinker <corey.huinker@gmail.com> wrote:\n\n>\n>\n> I rebased, but there are no code differences.\n>\n\nThe patch set seem to be in a good shape and pretty stable for quite a\nwhile.\nCould you add one more minor improvement, a new line after variables\ndeclaration?\n\n+ int exit_code =\nwait_result_to_exit_code(result);\n+ char buf[32];\n...here\n+ snprintf(buf, sizeof(buf), \"%d\", exit_code);\n+ SetVariable(pset.vars, \"SHELL_EXIT_CODE\", buf);\n+ SetVariable(pset.vars, \"SHELL_ERROR\", \"true\");\n\n+ char exit_code_buf[32];\n... and here\n+ snprintf(exit_code_buf, sizeof(exit_code_buf), \"%d\",\n+ wait_result_to_exit_code(exit_code));\n+ SetVariable(pset.vars, \"SHELL_EXIT_CODE\", exit_code_buf);\n+ SetVariable(pset.vars, \"SHELL_ERROR\", \"true\");\n\nAfter this changes, I think, we make this patch RfC, shall we?\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Mon, 30 Jan 2023 at 23:23, Corey Huinker <corey.huinker@gmail.com> wrote:I rebased, but there are no code differences. The patch set seem to be in a good shape and pretty stable for quite a while.Could you add one more minor improvement, a new line after variables declaration?+ int exit_code = wait_result_to_exit_code(result);+ char buf[32];...here+ snprintf(buf, sizeof(buf), \"%d\", exit_code);+ SetVariable(pset.vars, \"SHELL_EXIT_CODE\", buf);+ SetVariable(pset.vars, \"SHELL_ERROR\", \"true\");+ char exit_code_buf[32];... and here+ snprintf(exit_code_buf, sizeof(exit_code_buf), \"%d\",+ wait_result_to_exit_code(exit_code));+ SetVariable(pset.vars, \"SHELL_EXIT_CODE\", exit_code_buf);+ SetVariable(pset.vars, \"SHELL_ERROR\", \"true\"); After this changes, I think, we make this patch RfC, shall we?-- Best regards,Maxim Orlov.",
"msg_date": "Wed, 22 Feb 2023 17:43:45 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": ">\n>\n>\n> The patch set seem to be in a good shape and pretty stable for quite a\n> while.\n> Could you add one more minor improvement, a new line after variables\n> declaration?\n>\n\nAs you wish. Attached.\n\n\n>\n> After this changes, I think, we make this patch RfC, shall we?\n>\n>\nFingers crossed.",
"msg_date": "Wed, 22 Feb 2023 15:04:33 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 9:23 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> Unfortunately, there is a fail in FreeBSD https://cirrus-ci.com/task/6466749487382528\n>>\n>> Maybe, this patch is need to be rebased?\n>>\n>\n> That failure is in postgres_fdw, which this code doesn't touch.\n>\n> I'm not able to get to /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/regression.out - It's not in either of the browseable zip files and the 2nd zip file isn't downloadable, so it's hard to see what's going on.\n\nYeah, that'd be our current top flapping CI test[1]. These new\n\"*-running\" tests (like the old \"make installcheck\") are only enabled\nin the FreeBSD CI task currently, so that's why the failures only show\nup there. I see[2] half a dozen failures of the \"fdw_retry_check\"\nthing in the past ~24 hours.\n\n[1] https://www.postgresql.org/message-id/flat/20221209001511.n3vqodxobqgscuil%40awork3.anarazel.de\n[2] http://cfbot.cputube.org/highlights/regress.html\n\n\n",
"msg_date": "Thu, 23 Feb 2023 10:17:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 4:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Jan 31, 2023 at 9:23 AM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >> Unfortunately, there is a fail in FreeBSD\n> https://cirrus-ci.com/task/6466749487382528\n> >>\n> >> Maybe, this patch is need to be rebased?\n> >>\n> >\n> > That failure is in postgres_fdw, which this code doesn't touch.\n> >\n> > I'm not able to get to\n> /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/regression.out\n> - It's not in either of the browseable zip files and the 2nd zip file isn't\n> downloadable, so it's hard to see what's going on.\n>\n> Yeah, that'd be our current top flapping CI test[1]. These new\n> \"*-running\" tests (like the old \"make installcheck\") are only enabled\n> in the FreeBSD CI task currently, so that's why the failures only show\n> up there. I see[2] half a dozen failures of the \"fdw_retry_check\"\n> thing in the past ~24 hours.\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/20221209001511.n3vqodxobqgscuil%40awork3.anarazel.de\n> [2] http://cfbot.cputube.org/highlights/regress.html\n\n\nThanks for the explanation. I was baffled.\n\nOn Wed, Feb 22, 2023 at 4:18 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Tue, Jan 31, 2023 at 9:23 AM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> Unfortunately, there is a fail in FreeBSD https://cirrus-ci.com/task/6466749487382528\n>>\n>> Maybe, this patch is need to be rebased?\n>>\n>\n> That failure is in postgres_fdw, which this code doesn't touch.\n>\n> I'm not able to get to /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/regression.out - It's not in either of the browseable zip files and the 2nd zip file isn't downloadable, so it's hard to see what's going on.\n\nYeah, that'd be our current top flapping CI test[1]. 
These new\n\"*-running\" tests (like the old \"make installcheck\") are only enabled\nin the FreeBSD CI task currently, so that's why the failures only show\nup there. I see[2] half a dozen failures of the \"fdw_retry_check\"\nthing in the past ~24 hours.\n\n[1] https://www.postgresql.org/message-id/flat/20221209001511.n3vqodxobqgscuil%40awork3.anarazel.de\n[2] http://cfbot.cputube.org/highlights/regress.htmlThanks for the explanation. I was baffled.",
"msg_date": "Thu, 23 Feb 2023 02:30:08 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> [ v9-0003-Add-psql-variables-SHELL_ERROR-and-SHELL_EXIT_COD.patch ]\n\nI took a brief look through this. I'm on board with the general\nconcept, but I think you've spent too much time on an ultimately\nfutile attempt to get a committable test case, and not enough time\non making the behavior correct/usable. In particular, it bothers\nme that there doesn't appear to be any way to distinguish whether\na command failed on nonzero exit code versus a signal. Also,\nI see that you're willing to report whatever ferror returns\n(something quite unspecified in relevant standards, beyond\nzero/nonzero) as an equally-non-distinguishable integer.\n\nI'm tempted to suggest that we ought to be returning a string,\nalong the lines of \"OK\" or \"exit code N\" or \"signal N\" or\n\"I/O error\". This though brings up the question of exactly\nwhat you expect scripts to use the variable for. Maybe such\na definition would be unhelpful, but if so why? Maybe we\nshould content ourselves with adding the SHELL_ERROR boolean, and\nleave the details to whatever we print in the error message?\n\nAs for the test case, I'm unimpressed with it because it's so\nweak as to be nearly useless; plus it seems like a potential\nsecurity issue (what if \"nosuchcommand\" exists?). It's fine\nto have it during development, I guess, but I'm inclined to\nleave it out of the eventual commit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 13:35:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 1:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > [ v9-0003-Add-psql-variables-SHELL_ERROR-and-SHELL_EXIT_COD.patch ]\n>\n> I took a brief look through this. I'm on board with the general\n> concept, but I think you've spent too much time on an ultimately\n> futile attempt to get a committable test case, and not enough time\n>\n\nI didn't want to give the impression that I was avoiding/dismissing the\ntest case, and at some point I got curious how far we could push pg_regress.\n\n\n> on making the behavior correct/usable. In particular, it bothers\n> me that there doesn't appear to be any way to distinguish whether\n> a command failed on nonzero exit code versus a signal. Also,\n>\n\nThat's an intriguing distinction, and one I hadn't considered, mostly\nbecause I assumed that any command with a set of exit codes rich enough to\nwarrant inspection would have a separate exit code for i-was-terminated.\n\nIt would certainly be possible to have two independent booleans,\nSHELL_ERROR and SHELL_SIGNAL.\n\n\n> I see that you're willing to report whatever ferror returns\n> (something quite unspecified in relevant standards, beyond\n> zero/nonzero) as an equally-non-distinguishable integer.\n>\n\nI had imagined users of this feature falling into two camps:\n\n1. Users writing scripts for their specific environment, with the ability\nto know what exit codes are possible and different desired courses of\naction based on those codes.\n2. Users in no specific environment, content with SHELL_ERROR boolean, who\nare probably just going to test for failure, and if so either \\set a\ndefault or \\echo a message and \\quit.\n\n\n\n>\n> I'm tempted to suggest that we ought to be returning a string,\n> along the lines of \"OK\" or \"exit code N\" or \"signal N\" or\n> \"I/O error\". This though brings up the question of exactly\n> what you expect scripts to use the variable for. 
Maybe such\n> a definition would be unhelpful, but if so why? Maybe we\n> should content ourselves with adding the SHELL_ERROR boolean, and\n> leave the details to whatever we print in the error message?\n>\n\nHaving a curated list of responses like \"OK\" and \"exit code N\" is also\nintriguing, but could be hard to unpack given psql's limited string\nmanipulation capabilities.\n\nI think the SHELL_ERROR boolean solves most cases, but it was suggested in\nhttps://www.postgresql.org/message-id/20221102115801.GG16921@telsasoft.com\nthat users might want more detail than that, even if that detail is\nunspecified and highly system dependent.\n\n\n>\n> As for the test case, I'm unimpressed with it because it's so\n> weak as to be nearly useless; plus it seems like a potential\n> security issue (what if \"nosuchcommand\" exists?). It's fine\n> to have it during development, I guess, but I'm inclined to\n> leave it out of the eventual commit.\n>\n>\nI have no objection to leaving it out. I think it proves that writing\nos-specific pg_regress tests is hard, and little else.",
"msg_date": "Fri, 3 Mar 2023 01:23:04 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 03:04:33PM -0500, Corey Huinker wrote:\n> +\n> +/*\n> + * Return the POSIX exit code (0 to 255) that corresponds to the argument.\n> + * The argument is a return code returned by wait(2) or waitpid(2), which\n> + * also applies to pclose(3) and system(3).\n> + */\n> +int\n> +wait_result_to_exit_code(int exit_status)\n> +{\n> +\tif (WIFEXITED(exit_status))\n> +\t\treturn WEXITSTATUS(exit_status);\n> +\tif (WIFSIGNALED(exit_status))\n> +\t\treturn WTERMSIG(exit_status);\n> +\treturn 0;\n> +}\n\nThis fails to distinguish between exiting with (say) code 1 and being\nkilled by signal 1.\n\n> -\t\t\tif (ferror(fd))\n> +\t\t\texit_code = ferror(fd);\n> +\t\t\tif (exit_code)\n\nAnd this adds even more ambiguity with internal library/system calls, as\nTom said.\n\n> +\t\tif (close_exit_code && !exit_code)\n> +\t\t{\n> +\t\t\terror = true;\n> +\t\t\texit_code = close_exit_code;\n> +\t\t\tif (close_exit_code == -1)\n> +\t\t\t\tpg_log_error(\"%s: %m\", cmd);\n\nI think if an error ocurrs in pclose(), then it should *always* be\nreported. Knowing that we somehow failed while running the command is\nmore important than knowing how the command ran when we had a failure\nwhile running it.\n\nNote that for some tools, a nonzero exit code can be normal. Like diff\nand grep.\n\nThe exit status is one byte. I think you should define the status\nvariable along the lines of:\n\n - 0 if successful; or,\n - a positive number 1..255 indicating its exit status. or,\n - a negative number N indicating it was terminated by signal -N; or,\n - 256 if an internal error occurred (like pclose/ferror);\n\nSee bash(1). This would be a good behavior to start with, since it\nought to be familiar to everyone, and if it's good enough to write/run\nshell scripts in, then it's got to be good enough for psql to run a\nsingle command in. 
I'm not sure why the shell uses 126-127 specially,\nthough..\n\nEXIT STATUS\n The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses\n fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell\n builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to\n indicate specific failure modes.\n\n For the shell's purposes, a command which exits with a zero exit status has succeeded. An exit status of zero indicates success.\n A non-zero exit status indicates failure. When a command terminates on a fatal signal N, bash uses the value of 128+N as the exit\n status.\n\n If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not exe‐\n cutable, the return status is 126.\n\n If a command fails because of an error during expansion or redirection, the exit status is greater than zero.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 8 Mar 2023 19:16:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> The exit status is one byte. I think you should define the status\n> variable along the lines of:\n\n> - 0 if successful; or,\n> - a positive number 1..255 indicating its exit status. or,\n> - a negative number N indicating it was terminated by signal -N; or,\n> - 256 if an internal error occurred (like pclose/ferror);\n\n> See bash(1). This would be a good behavior to start with, since it\n> ought to be familiar to everyone, and if it's good enough to write/run\n> shell scripts in, then it's got to be good enough for psql to run a\n> single command in.\n\nI'm okay with adopting bash's rule, but then it should actually match\nbash --- signal N is reported as 128+N, not -N.\n\nNot sure what to do about pclose/ferror cases. Maybe use -1?\n\n> I'm not sure why the shell uses 126-127 specially, though..\n\n127 is used similarly by system(3). I think both 126 and 127 might\nbe specified by POSIX, but not sure. In any case, those are outside\nour jurisdiction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:49:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'm okay with adopting bash's rule, but then it should actually match\n> bash --- signal N is reported as 128+N, not -N.\n>\n\n128+N is implemented.\n\nNonzero pclose errors of any kind will now overwrite an existing exit_code.\n\nI didn't write a formal test for the signals, but instead tested it\nmanually:\n\n[310444:16:54:32 EDT] corey=# \\! sleep 1000\n-- in another window, i found the pid of the sleep command and did a kill\n-9 on it\n[310444:16:55:15 EDT] corey=# \\echo :SHELL_EXIT_CODE\n137\n\n\n137 = 128+9, so that checks out.\n\nThe OS-specific regression test has been modified. On Windows systems it\nattempts to execute \"CON\", and on other systems it attempts to execute \"/\".\nThese are marginally better than \"nosuchcommand\" in that they should always\nexist on their host OS and could never be a legit executable. I have no\nopinion whether the test should live on past the development phase, but if\nit does not then the 0001 patch is not needed.",
"msg_date": "Fri, 17 Mar 2023 17:05:20 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "I did a look at the patch, and it seems to be in a good condition.\nIt implements declared functionality with no visible defects.\n\nAs for test, I think it is possible to implement \"signals\" case in tap\ntests. But let the actual commiter decide does it worth it or not.\n\n-- \nBest regards,\nMaxim Orlov.\n\nI did a look at the patch, and it seems to be in a good condition.It implements declared functionality with no visible defects. As for test, I think it is possible to implement \"signals\" case in tap tests. But let the actual commiter decide does it worth it or not.-- Best regards,Maxim Orlov.",
"msg_date": "Mon, 20 Mar 2023 19:13:58 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> 128+N is implemented.\n\nI think this mostly looks OK, but:\n\n* I still say there is no basis whatever for believing that the result\nof ferror() is an exit code, errno code, or anything else with\nsignificance beyond zero-or-not. Feeding it to wait_result_to_exit_code\nas you've done here is not going to do anything but mislead people in\na platform-dependent way. Probably should set exit_code to -1 if\nferror reports trouble.\n\n* Why do you have wait_result_to_exit_code defaulting to return 0\nif it doesn't recognize the code as either WIFEXITED or WIFSIGNALED?\nThat seems pretty misleading; again -1 would be a better idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 13:01:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 1:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > 128+N is implemented.\n>\n> I think this mostly looks OK, but:\n>\n> * I still say there is no basis whatever for believing that the result\n> of ferror() is an exit code, errno code, or anything else with\n> significance beyond zero-or-not. Feeding it to wait_result_to_exit_code\n> as you've done here is not going to do anything but mislead people in\n> a platform-dependent way. Probably should set exit_code to -1 if\n> ferror reports trouble.\n>\n\nAgreed. Sorry, I read your comment wrong the last time or I would have\nalready made it so.\n\n\n>\n> * Why do you have wait_result_to_exit_code defaulting to return 0\n> if it doesn't recognize the code as either WIFEXITED or WIFSIGNALED?\n> That seems pretty misleading; again -1 would be a better idea.\n>\n\nThat makes sense as well. Given that WIFSIGNALED is currently defined as\nthe negation of WIFEXITED, whatever default result we have here is\nbasically a this-should-never-happen. That might be something we want to\ncatch with an assert, but I'm fine with a -1 for now.",
"msg_date": "Mon, 20 Mar 2023 16:19:04 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> On Mon, Mar 20, 2023 at 1:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Why do you have wait_result_to_exit_code defaulting to return 0\n>> if it doesn't recognize the code as either WIFEXITED or WIFSIGNALED?\n>> That seems pretty misleading; again -1 would be a better idea.\n\n> That makes sense as well. Given that WIFSIGNALED is currently defined as\n> the negation of WIFEXITED, whatever default result we have here is\n> basically a this-should-never-happen.\n\nGood point. So we'd better have it first pass through -1 literally,\nelse pclose() or system() failure will be reported as something\nmisleading (probably signal 127?).\n\nPushed with that change, some cosmetic adjustments, and one significant\nlogic change in do_backtick: I made it do\n\n if (fd)\n {\n /*\n * Although pclose's result always sets SHELL_EXIT_CODE, we\n * historically have abandoned the backtick substitution only if it\n * returns -1.\n */\n exit_code = pclose(fd);\n if (exit_code == -1)\n {\n pg_log_error(\"%s: %m\", cmd);\n error = true;\n }\n }\n\nAs you had it, any nonzero result would prevent backtick substitution.\nI'm not really sure how much thought went into the existing behavior,\nbut I am pretty sure that it's not part of this patch's charter to\nchange that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Mar 2023 13:10:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": ">\n>\n> As you had it, any nonzero result would prevent backtick substitution.\n> I'm not really sure how much thought went into the existing behavior,\n> but I am pretty sure that it's not part of this patch's charter to\n> change that.\n>\n> regards, tom lane\n>\n\n The changes all make sense, thanks!\n\n\nAs you had it, any nonzero result would prevent backtick substitution.\nI'm not really sure how much thought went into the existing behavior,\nbut I am pretty sure that it's not part of this patch's charter to\nchange that.\n\n regards, tom lane The changes all make sense, thanks!",
"msg_date": "Tue, 21 Mar 2023 15:33:15 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 3:33 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n>\n>> As you had it, any nonzero result would prevent backtick substitution.\n>> I'm not really sure how much thought went into the existing behavior,\n>> but I am pretty sure that it's not part of this patch's charter to\n>> change that.\n>>\n>> regards, tom lane\n>>\n>\n> The changes all make sense, thanks!\n>\n\nThis is a follow up patch to apply the committed pattern to the various\npiped output commands.\n\nI spotted this oversight in the\nhttps://www.postgresql.org/message-id/CADkLM=dMG6AAWfeKvGnKOzz1O7ZNctFR1BzAA3K7-+XQxff=4Q@mail.gmail.com\nthread and, whether or not that feature gets in, we should probably apply\nit to output pipes as well.",
"msg_date": "Fri, 24 Mar 2023 17:21:27 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 5:21 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n>\n>\n> On Tue, Mar 21, 2023 at 3:33 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>>\n>>> As you had it, any nonzero result would prevent backtick substitution.\n>>> I'm not really sure how much thought went into the existing behavior,\n>>> but I am pretty sure that it's not part of this patch's charter to\n>>> change that.\n>>>\n>>> regards, tom lane\n>>>\n>>\n>> The changes all make sense, thanks!\n>>\n>\n> This is a follow up patch to apply the committed pattern to the various\n> piped output commands.\n>\n> I spotted this oversight in the\n> https://www.postgresql.org/message-id/CADkLM=dMG6AAWfeKvGnKOzz1O7ZNctFR1BzAA3K7-+XQxff=4Q@mail.gmail.com\n> thread and, whether or not that feature gets in, we should probably apply\n> it to output pipes as well.\n>\n\nFollowing up here. This addendum patch clearly isn't as important as\nanything currently trying to make it in before the feature freeze, but I\nthink the lack of setting the SHELL_ERROR and SHELL_EXIT_CODE vars on piped\ncommands would be viewed as a bug, so I'm hoping somebody can take a look\nat it.\n\nOn Fri, Mar 24, 2023 at 5:21 PM Corey Huinker <corey.huinker@gmail.com> wrote:On Tue, Mar 21, 2023 at 3:33 PM Corey Huinker <corey.huinker@gmail.com> wrote:\nAs you had it, any nonzero result would prevent backtick substitution.\nI'm not really sure how much thought went into the existing behavior,\nbut I am pretty sure that it's not part of this patch's charter to\nchange that.\n\n regards, tom lane The changes all make sense, thanks!This is a follow up patch to apply the committed pattern to the various piped output commands.I spotted this oversight in the https://www.postgresql.org/message-id/CADkLM=dMG6AAWfeKvGnKOzz1O7ZNctFR1BzAA3K7-+XQxff=4Q@mail.gmail.com thread and, whether or not that feature gets in, we should probably apply it to output pipes as well.Following up here. 
This addendum patch clearly isn't as important as anything currently trying to make it in before the feature freeze, but I think the lack of setting the SHELL_ERROR and SHELL_EXIT_CODE vars on piped commands would be viewed as a bug, so I'm hoping somebody can take a look at it.",
"msg_date": "Wed, 5 Apr 2023 17:29:34 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> Following up here. This addendum patch clearly isn't as important as\n> anything currently trying to make it in before the feature freeze, but I\n> think the lack of setting the SHELL_ERROR and SHELL_EXIT_CODE vars on piped\n> commands would be viewed as a bug, so I'm hoping somebody can take a look\n> at it.\n\nI changed the CF entry back to Needs Review to remind us we're not done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Apr 2023 17:49:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> This is a follow up patch to apply the committed pattern to the various\n> piped output commands.\n\nPushed with some changes:\n\n* You didn't update the documentation.\n\n* I thought this was way too many copies of the same logic. I made a\n subroutine to set these variables, and changed the original uses too.\n\n* You didn't change \\w (exec_command_write) to set these variables.\n I'm assuming that was an oversight; if it was intentional, please\n explain why.\n\nI looked through psql's other uses of pclose() and system(),\nand found:\n\t* pager invocations\n\t* backtick evaluation within a prompt\n\t* \\e (edit query buffer)\nI think that not changing these variables in those places is a good\nidea. For instance, if prompt display could change them then they'd\nnever survive long enough to be useful; plus, having the behavior\nvary depending on your prompt settings seems pretty horrid.\nIn general, these things are mainly useful to scripts, and I doubt\nthat we want their interactive behavior to vary from scripted behavior,\nso setting them during interactive-only operations seems bad.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Apr 2023 17:41:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add SHELL_EXIT_CODE to psql"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n Recently, I'm looking into the implementation of parallel index building. And I noticed that in `README.parallel` we mentioned that `we allow no writes to the database and no DDL` . But if I understand it correctly, index building as one significant DDL is now supporting parallel (though the `_bt_load` is still single processed). So I guess we should update the outdated document.\r\n\r\n The related file is here. https://github.com/postgres/postgres/blob/8c7146790811ac4eee23fab2226db14b636e1ac5/src/backend/access/transam/README.parallel#L85\r\n[https://opengraph.githubassets.com/e00d7ae4b1ecff53f42d16a5f9dabfdada039cabbfc1a703bad3634a5903d3d0/postgres/postgres]<https://github.com/postgres/postgres/blob/8c7146790811ac4eee23fab2226db14b636e1ac5/src/backend/access/transam/README.parallel#L85>\r\npostgres/README.parallel at 8c7146790811ac4eee23fab2226db14b636e1ac5 · postgres/postgres<https://github.com/postgres/postgres/blob/8c7146790811ac4eee23fab2226db14b636e1ac5/src/backend/access/transam/README.parallel#L85>\r\nMirror of the official PostgreSQL GIT repository. Note that this is just a *mirror* - we don't work with pull requests on github. To contribute, please see https://wiki.postgresql.org/wiki/Subm...\r\ngithub.com\r\n\r\n\r\nBest,\r\nQinghao\r\n\n\n\n\n\n\n\n\r\nHi hackers,\n\r\n Recently, I'm looking into the implementation of parallel index building. And I noticed that in `README.parallel` we mentioned that `we allow no writes to the database and no DDL` . But if I understand it correctly, index building as one significant DDL\r\n is now supporting parallel (though the `_bt_load` is still single processed). So\r\n I guess we should update the outdated document.\n\n\n\n\n The related file is here. 
https://github.com/postgres/postgres/blob/8c7146790811ac4eee23fab2226db14b636e1ac5/src/backend/access/transam/README.parallel#L85\n\n\n\n\n\n\n\n\n\n\n\n\npostgres/README.parallel\r\n at 8c7146790811ac4eee23fab2226db14b636e1ac5 · postgres/postgres\n\r\nMirror of the official PostgreSQL GIT repository. Note that this is just a *mirror* - we don't work with pull requests on github. To contribute, please see https://wiki.postgresql.org/wiki/Subm...\n\r\ngithub.com\n\n\n\n\n\n\n\n\n\n\n\nBest,\r\nQinghao",
"msg_date": "Fri, 4 Nov 2022 10:43:34 +0000",
"msg_from": "qinghao huang <wfnuser@hotmail.com>",
"msg_from_op": true,
"msg_subject": "found the document `README.parallel` to be a little bit incorrect"
}
] |
[
{
"msg_contents": "I committed draft release notes at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=bc62182f0afe6b01fec45b8d26df03fc9de4599a\n\nPlease send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Nov 2022 12:47:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Draft back-branch release notes are up"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 12:47:40PM -0400, Tom Lane wrote:\n> I committed draft release notes at\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=bc62182f0afe6b01fec45b8d26df03fc9de4599a\n\n+ Fix planner failure with extended statistics on partitioned tables\n+ (Richard Guo, Justin Pryzby)\n\nThis can also happen with inheritance tables.\n\n+ Add missing guards for NULL connection pointer\n\nMaybe should be <literal>NULL or <symbol>NULL ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 4 Nov 2022 12:28:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> + Fix planner failure with extended statistics on partitioned tables\n> + (Richard Guo, Justin Pryzby)\n\n> This can also happen with inheritance tables.\n\n> + Add missing guards for NULL connection pointer\n\n> Maybe should be <literal>NULL or <symbol>NULL ?\n\nDone, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Nov 2022 11:14:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Draft back-branch release notes are up"
},
{
"msg_contents": "On 11/6/22 11:14 AM, Tom Lane wrote:\r\n> Justin Pryzby <pryzby@telsasoft.com> writes:\r\n>> + Fix planner failure with extended statistics on partitioned tables\r\n>> + (Richard Guo, Justin Pryzby)\r\n> \r\n>> This can also happen with inheritance tables.\r\n> \r\n>> + Add missing guards for NULL connection pointer\r\n> \r\n>> Maybe should be <literal>NULL or <symbol>NULL ?\r\n> \r\n> Done, thanks.\r\n\r\nHopefully not too late, but I noticed\r\n\r\n > Fix CREATE DATABASE to allow its oid parameter to exceed 231\r\n\r\nwhich should be 2^31 according to 2c6d4365\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 7 Nov 2022 10:30:24 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up"
},
{
"msg_contents": "On 11/7/22 10:30 AM, Jonathan S. Katz wrote:\r\n> On 11/6/22 11:14 AM, Tom Lane wrote:\r\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\r\n>>> + Fix planner failure with extended statistics on partitioned \r\n>>> tables\r\n>>> + (Richard Guo, Justin Pryzby)\r\n>>\r\n>>> This can also happen with inheritance tables.\r\n>>\r\n>>> + Add missing guards for NULL connection pointer\r\n>>\r\n>>> Maybe should be <literal>NULL or <symbol>NULL ?\r\n>>\r\n>> Done, thanks.\r\n> \r\n> Hopefully not too late, but I noticed\r\n> \r\n> > Fix CREATE DATABASE to allow its oid parameter to exceed 231\r\n> \r\n> which should be 2^31 according to 2c6d4365\r\n\r\nOh ignore, I had a copy-and-paste error into my plaintext editor.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 7 Nov 2022 10:34:26 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up"
}
] |
[
{
"msg_contents": "In an internal conversation it was seen that for some tests that want\nto enforce a behaviour, a behaviour that is controlled by a GUC, we\n_need_ to perform a sleep for an arbitrary amount of time.\nAlternatively, executing the rest of the test on a new connection also\nhelps to get the expected behaviour. Following is a sample snippet of\nsuch a test.\n\nALTER SYSTEM SET param TO value;\nSELECT pg_reload_conf();\n-- Either pg_sleep(0.1) or \\connect here for next command to behave as expected.\nALTER ROLE ... PASSWORD '...';\n\nThis is because of the fact that the SIGHUP, sent to Postmaster by\nthis backend, takes some time to get back to the issuing backend.\n\nAlthough a pg_sleep() call works to alleviate the pain in most cases,\nit does not provide any certainty. Following the Principle Of Least\nAstonishment, we want a command that loads the configuration\n_synchronously_, so that the subsequent commands from the client can\nbe confident that the requisite parameter changes have taken effect.\n\nThe attached patch makes the pg_reload_conf() function set\nConfigReloadPending to true, which will force the postgres main\ncommand-processing loop to process and apply config changes _before_\nexecuting the next command.\n\nThe only downside I can think of this approach is that it _might_\ncause the issuing backend to process the config file twice; once after\nit has processed the current command, and another time when the SIGHUP\nsignal comes from the Postmaster. If the SIGHUP signal from the\nPostmaster comes before the end of current command, then the issuing\nbackend will process the config only once, as before the patch. 
In any\ncase, this extra processing of the config seems harmless, and the\nbenefit outweighs the extra processing done.\n\nThe alternate methods that I considered (see branch reload_my_conf at\n[2]) were to either implement the SQL command RELOAD CONFIGURATION [\nFOR ALL ], or to create an overloaded version of function\npg_reload_conf(). But both those approaches had much more significant\ndownsides, like additional GRANTs, etc.\n\nThere have been issues identified in the past (see [1]) that possibly\nmay be alleviated by this approach of applying config changes\nsynchronously.\n\n[1]: https://www.postgresql.org/message-id/2138662.1623460441%40sss.pgh.pa.us\n[2]: https://github.com/gurjeet/postgres/tree/reload_my_conf\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Fri, 4 Nov 2022 10:26:38 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "pg_reload_conf() synchronously"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-04 10:26:38 -0700, Gurjeet Singh wrote:\n> The attached patch makes the pg_reload_conf() function set\n> ConfigReloadPending to true, which will force the postgres main\n> command-processing loop to process and apply config changes _before_\n> executing the next command.\n\nWorth noting that this doesn't necessarily suffice to avoid race conditions in\ntests, if the test depends on *other* backends having seen the configuration\nchanges.\n\nIt might be worth to use the global barrier mechanism to count which backends\nhave reloaded configuration and to provide a function / option to pg_sleep\nthat waits for that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 19:46:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_reload_conf() synchronously"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Worth noting that this doesn't necessarily suffice to avoid race conditions in\n> tests, if the test depends on *other* backends having seen the configuration\n> changes.\n\nTrue, but do we have any such cases?\n\n> It might be worth to use the global barrier mechanism to count which backends\n> have reloaded configuration and to provide a function / option to pg_sleep\n> that waits for that.\n\nThat ... seems like a lot of mechanism. And it could easily result\nin undetected deadlocks, if any other session is blocked on you.\nI seriously doubt that we should go there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Nov 2022 23:35:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_reload_conf() synchronously"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-04 23:35:21 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Worth noting that this doesn't necessarily suffice to avoid race conditions in\n> > tests, if the test depends on *other* backends having seen the configuration\n> > changes.\n> \n> True, but do we have any such cases?\n\nI know I hit it twice and gave up on the tests.\n\n\n> > It might be worth to use the global barrier mechanism to count which backends\n> > have reloaded configuration and to provide a function / option to pg_sleep\n> > that waits for that.\n> \n> That ... seems like a lot of mechanism. And it could easily result\n> in undetected deadlocks, if any other session is blocked on you.\n> I seriously doubt that we should go there.\n\nYea, it's not great. Probably ok enough for a test, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Nov 2022 21:51:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_reload_conf() synchronously"
},
{
"msg_contents": "On Fri, Nov 04, 2022 at 09:51:08PM -0700, Andres Freund wrote:\n> On 2022-11-04 23:35:21 -0400, Tom Lane wrote:\n>> True, but do we have any such cases?\n> \n> I know I hit it twice and gave up on the tests.\n\nStill, is there any need for a full-blown facility for the case of an\nindividual backend? Using a new function to track that all the\nchanges are in effect is useful for isolation tests, less for single\nsessions where a function to wait for all the backends is no different\nthan a \\c to enforce a reload because both tests would need an extra\nstep (on top of making parallel tests longer if something does a long\ntransaction?).\n\nAs far as I know, Gurjeet has been annoyed only with non-user-settable\nGUCs for one connection (correct me of course), there was nothing\nfancy with isolation tests, yet. Not saying that this is useless for\nisolation tests, this would have its cases for assumptions where\nmultiple GUCs need to be synced across multiple sessions, but it seems\nto me that we have two different cases in need of two different\nsolutions.\n--\nMichael",
"msg_date": "Sat, 5 Nov 2022 14:26:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_reload_conf() synchronously"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-05 14:26:44 +0900, Michael Paquier wrote:\n> On Fri, Nov 04, 2022 at 09:51:08PM -0700, Andres Freund wrote:\n> > On 2022-11-04 23:35:21 -0400, Tom Lane wrote:\n> >> True, but do we have any such cases?\n> > \n> > I know I hit it twice and gave up on the tests.\n> \n> Still, is there any need for a full-blown facility for the case of an\n> individual backend?\n\nNo, and I didn't claim it was.\n\n\n> Using a new function to track that all the changes are in effect is useful\n> for isolation tests\n\nI hit it in tap tests, fwiw.\n\n\n> As far as I know, Gurjeet has been annoyed only with non-user-settable\n> GUCs for one connection (correct me of course), there was nothing\n> fancy with isolation tests, yet. Not saying that this is useless for\n> isolation tests, this would have its cases for assumptions where\n> multiple GUCs need to be synced across multiple sessions, but it seems\n> to me that we have two different cases in need of two different\n> solutions.\n\nI didn't say we need to go for something more complete. Just that it's worth\nthinking about.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Nov 2022 11:22:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_reload_conf() synchronously"
},
{
"msg_contents": "On Sat, Nov 5, 2022 at 11:23 AM Andres Freund <andres@anarazel.de> wrote:\n> > As far as I know, Gurjeet has been annoyed only with non-user-settable\n> > GUCs for one connection (correct me of course), there was nothing\n> > fancy with isolation tests, yet. Not saying that this is useless for\n> > isolation tests, this would have its cases for assumptions where\n> > multiple GUCs need to be synced across multiple sessions, but it seems\n> > to me that we have two different cases in need of two different\n> > solutions.\n>\n> I didn't say we need to go for something more complete. Just that it's worth\n> thinking about.\n\nFWIW, I have considered a few different approaches, but all of them\nwere not only more work, they were fragile, and diffcult to prove\ncorrectness of. For example, I thought of implementing DSM based\nsynchronisation between processes, so that the invoking backend can be\nsure that _all_ children of Postmaster have processed the reload. But\nthat will run into problems as different backends get created, and as\nthey exit.\n\nThe invoking client may be interested in just client-facing backends\nhonoring the reload before moving on, so it would have to wait until\nall the other backends finish their current command and return to the\nmain loop; but that may never happen, because one of the backends may\nbe processing a long-running query. Or, for some reason, the the\ncaller may be interested in only the autovacuum proccesses honoring\nits reload request. So I thought about creating _classes_ of backends:\nclient-facing, other always-on children of Postmaster, BGWorkers, etc.\nBut that just makes the problem more difficult to solve.\n\nI hadn't considered the possibilty of deadlocks that Tom raised, so\nthat's another downside of the other approaches.\n\nThe proposed patch solves a specific problem, that of a single backend\nreloading conf before the next command, without adding any complexity\nfor any other cases. 
It sure is worth solving the case where a backend\nwaits for another backed(s) to process the conf files, but it will\nhave to be a separate solution, becuase it's much more difficult to\nget it right.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sat, 5 Nov 2022 12:03:49 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: pg_reload_conf() synchronously"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-05 12:03:49 -0700, Gurjeet Singh wrote:\n> For example, I thought of implementing DSM based synchronisation between\n> processes, so that the invoking backend can be sure that _all_ children of\n> Postmaster have processed the reload. But that will run into problems as\n> different backends get created, and as they exit.\n\nIf you just want something like that you really can just use the global\nbarrier mechanism. The hard part is how to deal with situations like two\nbackends waiting at the same time. Possibly the best way would be to not\nactually offer a blocking API but just a way to ask whether a change has been\nprocessed everywhere, and require explicit polling on the client side.\n\n\n> The proposed patch solves a specific problem, that of a single backend\n> reloading conf before the next command, without adding any complexity\n> for any other cases.\n\nI'm not sure that's true btw - I seem to recall that there's code somewhere\nnoting that it relies on postmaster being the first to process a config\nchange. Which wouldn't be true with this change anymore. I remember not being\nconvinced by that logic, because successive config changes can still lead to\nbackends processing the config file first.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Nov 2022 12:26:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_reload_conf() synchronously"
}
]
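The polling-style design Andres sketches above (expose a non-blocking check for whether every backend has processed a given change, and require the client to poll, rather than offering a blocking API) can be illustrated with a toy model. This is only a sketch of the idea; the class and method names here are invented for illustration and correspond to no actual PostgreSQL API.

```python
import threading

class ConfigGeneration:
    """Toy model of the polling approach: each backend records the
    configuration generation it has processed; the client polls until
    every backend has caught up, instead of blocking inside the server."""

    def __init__(self, backends):
        self.lock = threading.Lock()
        self.current = 0                      # generation of the latest reload request
        self.seen = {b: 0 for b in backends}  # generation each backend has processed

    def request_reload(self):
        # Called by the session issuing the (hypothetical) reload request.
        with self.lock:
            self.current += 1
            return self.current

    def backend_processed(self, backend):
        # Called from each backend's main loop after re-reading the config.
        with self.lock:
            self.seen[backend] = self.current

    def all_caught_up(self, generation):
        # Non-blocking check the client can poll.
        with self.lock:
            return all(g >= generation for g in self.seen.values())

state = ConfigGeneration(["backend1", "backend2"])
gen = state.request_reload()
print(state.all_caught_up(gen))   # False: nobody has re-read the config yet
state.backend_processed("backend1")
state.backend_processed("backend2")
print(state.all_caught_up(gen))   # True
```

Because `all_caught_up()` never blocks, two sessions polling for different generations cannot wait on each other, which sidesteps the deadlock hazard raised for the blocking design.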
[
{
"msg_contents": "Hi,\n\nWhen I played with regression tests for pg_restore, I tested -T filtering\ntriggers too. I had problems with restoring triggers. I found that the name\nfor trigger uses the pattern \"tablename triggername\" (not just (and\ncorrect) triggername).\n\nI propose to generate tag just like trigger name\n\nproposed patch attached\n\nregards\n\nPavel",
"msg_date": "Sun, 6 Nov 2022 06:41:14 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "bug: pg_dump use strange tag for trigger"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I played with regression tests for pg_restore, I tested -T filtering\n> triggers too. I had problems with restoring triggers. I found that the name\n> for trigger uses the pattern \"tablename triggername\" (not just (and\n> correct) triggername).\n\n> I propose to generate tag just like trigger name\n\nTrigger names by themselves aren't even a little bit unique, so that\ndoesn't seem like a great idea to me. There's backwards compatibility\nto worry about, too. Maybe we need a documentation adjustment, instead?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 06 Nov 2022 09:52:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug: pg_dump use strange tag for trigger"
},
{
"msg_contents": "ne 6. 11. 2022 v 15:52 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > When I played with regression tests for pg_restore, I tested -T filtering\n> > triggers too. I had problems with restoring triggers. I found that the\n> name\n> > for trigger uses the pattern \"tablename triggername\" (not just (and\n> > correct) triggername).\n>\n> > I propose to generate tag just like trigger name\n>\n> Trigger names by themselves aren't even a little bit unique, so that\n> doesn't seem like a great idea to me. There's backwards compatibility\n> to worry about, too. Maybe we need a documentation adjustment, instead?\n>\n\nI understand, but the space is a little bit non intuitive. Maybe use dot\nthere and better documentation.\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n\nne 6. 11. 2022 v 15:52 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> When I played with regression tests for pg_restore, I tested -T filtering\n> triggers too. I had problems with restoring triggers. I found that the name\n> for trigger uses the pattern \"tablename triggername\" (not just (and\n> correct) triggername).\n\n> I propose to generate tag just like trigger name\n\nTrigger names by themselves aren't even a little bit unique, so that\ndoesn't seem like a great idea to me. There's backwards compatibility\nto worry about, too. Maybe we need a documentation adjustment, instead?I understand, but the space is a little bit non intuitive. Maybe use dot there and better documentation.RegardsPavel \n\n regards, tom lane",
"msg_date": "Sun, 6 Nov 2022 17:14:46 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: bug: pg_dump use strange tag for trigger"
}
]
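As the thread above describes, pg_dump composes a trigger's archive TOC tag from the table name and the trigger name separated by a space, since trigger names alone are not unique. A toy sketch of the consequence for tag matching (illustrative Python, not pg_dump's actual C code; the table and trigger names are made up):

```python
def trigger_tag(table_name, trigger_name):
    # pg_dump's trigger TOC entries are tagged "tablename triggername";
    # the table name qualifies the otherwise non-unique trigger name.
    return f"{table_name} {trigger_name}"

tag = trigger_tag("orders", "set_updated_at")
# Matching on the bare trigger name misses; the combined tag matches.
print(tag == "set_updated_at")         # False
print(tag == "orders set_updated_at")  # True
```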